00:00:00.001 Started by upstream project "autotest-per-patch" build number 132567 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:02.838 The recommended git tool is: git 00:00:02.838 using credential 00000000-0000-0000-0000-000000000002 00:00:02.840 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.850 Fetching changes from the remote Git repository 00:00:02.854 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.865 Using shallow fetch with depth 1 00:00:02.865 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.865 > git --version # timeout=10 00:00:02.875 > git --version # 'git version 2.39.2' 00:00:02.875 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.886 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.886 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.556 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.568 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.580 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.580 > git config core.sparsecheckout # timeout=10 00:00:09.594 > git read-tree -mu HEAD # timeout=10 00:00:09.610 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.634 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.634 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.725 [Pipeline] Start of Pipeline 00:00:09.740 [Pipeline] library 00:00:09.742 Loading library shm_lib@master 00:00:09.742 Library shm_lib@master is cached. Copying from home. 00:00:09.754 [Pipeline] node 00:00:09.764 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:09.766 [Pipeline] { 00:00:09.775 [Pipeline] catchError 00:00:09.776 [Pipeline] { 00:00:09.788 [Pipeline] wrap 00:00:09.796 [Pipeline] { 00:00:09.802 [Pipeline] stage 00:00:09.804 [Pipeline] { (Prologue) 00:00:09.821 [Pipeline] echo 00:00:09.822 Node: VM-host-SM16 00:00:09.829 [Pipeline] cleanWs 00:00:09.838 [WS-CLEANUP] Deleting project workspace... 00:00:09.838 [WS-CLEANUP] Deferred wipeout is used... 
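(Editor's sketch, for reference only: the jbp checkout logged above amounts to a shallow fetch of refs/heads/master followed by a forced checkout of FETCH_HEAD. The URL and revision below are the ones from this run; the target directory name is illustrative, and the credential/proxy settings shown in the log are omitted.)

    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # db4637e8... was FETCH_HEAD (tip of master) at the time of this build
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507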
00:00:09.844 [WS-CLEANUP] done 00:00:10.118 [Pipeline] setCustomBuildProperty 00:00:10.215 [Pipeline] httpRequest 00:00:10.573 [Pipeline] echo 00:00:10.575 Sorcerer 10.211.164.20 is alive 00:00:10.583 [Pipeline] retry 00:00:10.585 [Pipeline] { 00:00:10.598 [Pipeline] httpRequest 00:00:10.602 HttpMethod: GET 00:00:10.602 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.603 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.609 Response Code: HTTP/1.1 200 OK 00:00:10.610 Success: Status code 200 is in the accepted range: 200,404 00:00:10.610 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.505 [Pipeline] } 00:00:15.519 [Pipeline] // retry 00:00:15.532 [Pipeline] sh 00:00:15.812 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.828 [Pipeline] httpRequest 00:00:17.523 [Pipeline] echo 00:00:17.524 Sorcerer 10.211.164.20 is alive 00:00:17.532 [Pipeline] retry 00:00:17.534 [Pipeline] { 00:00:17.544 [Pipeline] httpRequest 00:00:17.549 HttpMethod: GET 00:00:17.550 URL: http://10.211.164.20/packages/spdk_345c51d49514ce7f12fb226eed2467af150d8a03.tar.gz 00:00:17.550 Sending request to url: http://10.211.164.20/packages/spdk_345c51d49514ce7f12fb226eed2467af150d8a03.tar.gz 00:00:17.572 Response Code: HTTP/1.1 200 OK 00:00:17.573 Success: Status code 200 is in the accepted range: 200,404 00:00:17.573 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_345c51d49514ce7f12fb226eed2467af150d8a03.tar.gz 00:04:59.283 [Pipeline] } 00:04:59.300 [Pipeline] // retry 00:04:59.308 [Pipeline] sh 00:04:59.589 + tar --no-same-owner -xf spdk_345c51d49514ce7f12fb226eed2467af150d8a03.tar.gz 00:05:02.888 [Pipeline] sh 00:05:03.174 + git -C spdk log --oneline -n5 00:05:03.174 345c51d49 nvmf/tcp: remove await_req TAILQ 00:05:03.174 e286d3c2f nvmf/tcp: add nvmf_tcp_qpair_process() helper function 00:05:03.174 e9dea99c0 nvmf/tcp: simplify nvmf_tcp_poll_group_poll event counting 00:05:03.174 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:05:03.174 5592070b3 doc: update nvmf_tracing.md 00:05:03.193 [Pipeline] writeFile 00:05:03.207 [Pipeline] sh 00:05:03.558 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:03.570 [Pipeline] sh 00:05:03.849 + cat autorun-spdk.conf 00:05:03.849 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:03.849 SPDK_TEST_NVMF=1 00:05:03.849 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:03.849 SPDK_TEST_URING=1 00:05:03.849 SPDK_TEST_USDT=1 00:05:03.849 SPDK_RUN_UBSAN=1 00:05:03.849 NET_TYPE=virt 00:05:03.849 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:03.856 RUN_NIGHTLY=0 00:05:03.857 [Pipeline] } 00:05:03.870 [Pipeline] // stage 00:05:03.883 [Pipeline] stage 00:05:03.885 [Pipeline] { (Run VM) 00:05:03.896 [Pipeline] sh 00:05:04.178 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:04.178 + echo 'Start stage prepare_nvme.sh' 00:05:04.178 Start stage prepare_nvme.sh 00:05:04.178 + [[ -n 0 ]] 00:05:04.178 + disk_prefix=ex0 00:05:04.178 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:05:04.178 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:05:04.178 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:05:04.178 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:04.178 ++ SPDK_TEST_NVMF=1 00:05:04.178 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:04.178 ++ SPDK_TEST_URING=1 00:05:04.178 ++ 
SPDK_TEST_USDT=1 00:05:04.178 ++ SPDK_RUN_UBSAN=1 00:05:04.178 ++ NET_TYPE=virt 00:05:04.178 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:04.178 ++ RUN_NIGHTLY=0 00:05:04.178 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:04.178 + nvme_files=() 00:05:04.178 + declare -A nvme_files 00:05:04.178 + backend_dir=/var/lib/libvirt/images/backends 00:05:04.178 + nvme_files['nvme.img']=5G 00:05:04.178 + nvme_files['nvme-cmb.img']=5G 00:05:04.178 + nvme_files['nvme-multi0.img']=4G 00:05:04.178 + nvme_files['nvme-multi1.img']=4G 00:05:04.178 + nvme_files['nvme-multi2.img']=4G 00:05:04.178 + nvme_files['nvme-openstack.img']=8G 00:05:04.178 + nvme_files['nvme-zns.img']=5G 00:05:04.178 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:04.178 + (( SPDK_TEST_FTL == 1 )) 00:05:04.178 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:04.178 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:04.178 + for nvme in "${!nvme_files[@]}" 00:05:04.178 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:05:04.178 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:04.178 + for nvme in "${!nvme_files[@]}" 00:05:04.178 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:05:05.142 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:05.142 + for nvme in "${!nvme_files[@]}" 00:05:05.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:05:05.142 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:05.142 + for nvme in "${!nvme_files[@]}" 00:05:05.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:05:05.142 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:05.142 + for nvme in "${!nvme_files[@]}" 00:05:05.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:05:05.142 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:05.142 + for nvme in "${!nvme_files[@]}" 00:05:05.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:05:05.142 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:05.142 + for nvme in "${!nvme_files[@]}" 00:05:05.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:05:06.075 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:06.075 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:05:06.075 + echo 'End stage prepare_nvme.sh' 00:05:06.075 End stage prepare_nvme.sh 00:05:06.085 [Pipeline] sh 00:05:06.362 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:06.362 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b 
/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:05:06.362 00:05:06.362 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:05:06.362 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:05:06.362 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:06.362 HELP=0 00:05:06.362 DRY_RUN=0 00:05:06.362 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:05:06.362 NVME_DISKS_TYPE=nvme,nvme, 00:05:06.362 NVME_AUTO_CREATE=0 00:05:06.362 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:05:06.362 NVME_CMB=,, 00:05:06.362 NVME_PMR=,, 00:05:06.362 NVME_ZNS=,, 00:05:06.362 NVME_MS=,, 00:05:06.362 NVME_FDP=,, 00:05:06.362 SPDK_VAGRANT_DISTRO=fedora39 00:05:06.362 SPDK_VAGRANT_VMCPU=10 00:05:06.362 SPDK_VAGRANT_VMRAM=12288 00:05:06.362 SPDK_VAGRANT_PROVIDER=libvirt 00:05:06.362 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:06.362 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:06.362 SPDK_OPENSTACK_NETWORK=0 00:05:06.362 VAGRANT_PACKAGE_BOX=0 00:05:06.362 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:06.362 FORCE_DISTRO=true 00:05:06.362 VAGRANT_BOX_VERSION= 00:05:06.362 EXTRA_VAGRANTFILES= 00:05:06.362 NIC_MODEL=e1000 00:05:06.362 00:05:06.362 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:05:06.362 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:10.556 Bringing machine 'default' up with 'libvirt' provider... 00:05:10.814 ==> default: Creating image (snapshot of base box volume). 00:05:11.073 ==> default: Creating domain with the following settings... 
00:05:11.073 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732687215_556977d627a9929eff6b 00:05:11.073 ==> default: -- Domain type: kvm 00:05:11.073 ==> default: -- Cpus: 10 00:05:11.073 ==> default: -- Feature: acpi 00:05:11.073 ==> default: -- Feature: apic 00:05:11.073 ==> default: -- Feature: pae 00:05:11.073 ==> default: -- Memory: 12288M 00:05:11.073 ==> default: -- Memory Backing: hugepages: 00:05:11.073 ==> default: -- Management MAC: 00:05:11.073 ==> default: -- Loader: 00:05:11.073 ==> default: -- Nvram: 00:05:11.073 ==> default: -- Base box: spdk/fedora39 00:05:11.073 ==> default: -- Storage pool: default 00:05:11.073 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732687215_556977d627a9929eff6b.img (20G) 00:05:11.073 ==> default: -- Volume Cache: default 00:05:11.073 ==> default: -- Kernel: 00:05:11.073 ==> default: -- Initrd: 00:05:11.073 ==> default: -- Graphics Type: vnc 00:05:11.073 ==> default: -- Graphics Port: -1 00:05:11.073 ==> default: -- Graphics IP: 127.0.0.1 00:05:11.073 ==> default: -- Graphics Password: Not defined 00:05:11.073 ==> default: -- Video Type: cirrus 00:05:11.073 ==> default: -- Video VRAM: 9216 00:05:11.073 ==> default: -- Sound Type: 00:05:11.073 ==> default: -- Keymap: en-us 00:05:11.073 ==> default: -- TPM Path: 00:05:11.073 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:11.073 ==> default: -- Command line args: 00:05:11.073 ==> default: -> value=-device, 00:05:11.073 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:11.073 ==> default: -> value=-drive, 00:05:11.073 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:05:11.073 ==> default: -> value=-device, 00:05:11.073 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:11.073 ==> default: -> value=-device, 00:05:11.073 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:11.073 ==> default: -> value=-drive, 00:05:11.073 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:11.073 ==> default: -> value=-device, 00:05:11.073 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:11.073 ==> default: -> value=-drive, 00:05:11.073 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:11.073 ==> default: -> value=-device, 00:05:11.073 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:11.073 ==> default: -> value=-drive, 00:05:11.073 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:11.073 ==> default: -> value=-device, 00:05:11.073 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:11.073 ==> default: Creating shared folders metadata... 00:05:11.073 ==> default: Starting domain. 00:05:12.972 ==> default: Waiting for domain to get an IP address... 00:05:39.507 ==> default: Waiting for SSH to become available... 00:05:39.507 ==> default: Configuring and enabling network interfaces... 
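(Editor's sketch, for readability: the -device/-drive arguments printed above, assembled into one QEMU invocation. Controller nvme-0, serial 12340, exposes a single namespace backed by ex0-nvme.img; controller nvme-1, serial 12341, exposes three namespaces backed by the ex0-nvme-multi0/1/2 images, which is the nvme0n1 / nvme1n1-n3 layout seen later inside the guest. The emulator binary path and the remaining libvirt-generated machine options are not shown.)

    qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096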
00:05:43.707 default: SSH address: 192.168.121.34:22 00:05:43.707 default: SSH username: vagrant 00:05:43.707 default: SSH auth method: private key 00:05:45.628 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:53.745 ==> default: Mounting SSHFS shared folder... 00:05:55.121 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:55.121 ==> default: Checking Mount.. 00:05:56.500 ==> default: Folder Successfully Mounted! 00:05:56.500 ==> default: Running provisioner: file... 00:05:57.434 default: ~/.gitconfig => .gitconfig 00:05:57.692 00:05:57.692 SUCCESS! 00:05:57.692 00:05:57.692 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:05:57.692 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:57.692 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:05:57.692 00:05:57.700 [Pipeline] } 00:05:57.717 [Pipeline] // stage 00:05:57.729 [Pipeline] dir 00:05:57.730 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:05:57.732 [Pipeline] { 00:05:57.744 [Pipeline] catchError 00:05:57.746 [Pipeline] { 00:05:57.759 [Pipeline] sh 00:05:58.040 + vagrant ssh-config --host vagrant 00:05:58.040 + sed -ne /^Host/,$p 00:05:58.040 + tee ssh_conf 00:06:02.226 Host vagrant 00:06:02.226 HostName 192.168.121.34 00:06:02.226 User vagrant 00:06:02.226 Port 22 00:06:02.226 UserKnownHostsFile /dev/null 00:06:02.226 StrictHostKeyChecking no 00:06:02.226 PasswordAuthentication no 00:06:02.226 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:02.226 IdentitiesOnly yes 00:06:02.226 LogLevel FATAL 00:06:02.226 ForwardAgent yes 00:06:02.226 ForwardX11 yes 00:06:02.226 00:06:02.239 [Pipeline] withEnv 00:06:02.242 [Pipeline] { 00:06:02.256 [Pipeline] sh 00:06:02.536 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:02.536 source /etc/os-release 00:06:02.536 [[ -e /image.version ]] && img=$(< /image.version) 00:06:02.536 # Minimal, systemd-like check. 00:06:02.536 if [[ -e /.dockerenv ]]; then 00:06:02.536 # Clear garbage from the node's name: 00:06:02.536 # agt-er_autotest_547-896 -> autotest_547-896 00:06:02.536 # $HOSTNAME is the actual container id 00:06:02.536 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:02.536 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:02.536 # We can assume this is a mount from a host where container is running, 00:06:02.536 # so fetch its hostname to easily identify the target swarm worker. 
00:06:02.536 container="$(< /etc/hostname) ($agent)" 00:06:02.536 else 00:06:02.536 # Fallback 00:06:02.536 container=$agent 00:06:02.536 fi 00:06:02.536 fi 00:06:02.536 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:02.536 00:06:02.803 [Pipeline] } 00:06:02.819 [Pipeline] // withEnv 00:06:02.827 [Pipeline] setCustomBuildProperty 00:06:02.842 [Pipeline] stage 00:06:02.844 [Pipeline] { (Tests) 00:06:02.862 [Pipeline] sh 00:06:03.138 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:03.409 [Pipeline] sh 00:06:03.687 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:03.957 [Pipeline] timeout 00:06:03.957 Timeout set to expire in 1 hr 0 min 00:06:03.959 [Pipeline] { 00:06:03.974 [Pipeline] sh 00:06:04.251 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:04.850 HEAD is now at 345c51d49 nvmf/tcp: remove await_req TAILQ 00:06:04.863 [Pipeline] sh 00:06:05.173 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:05.445 [Pipeline] sh 00:06:05.727 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:05.743 [Pipeline] sh 00:06:06.022 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:06:06.280 ++ readlink -f spdk_repo 00:06:06.280 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:06.280 + [[ -n /home/vagrant/spdk_repo ]] 00:06:06.280 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:06.280 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:06.280 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:06.280 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:06.280 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:06.280 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:06:06.280 + cd /home/vagrant/spdk_repo 00:06:06.280 + source /etc/os-release 00:06:06.280 ++ NAME='Fedora Linux' 00:06:06.280 ++ VERSION='39 (Cloud Edition)' 00:06:06.280 ++ ID=fedora 00:06:06.280 ++ VERSION_ID=39 00:06:06.280 ++ VERSION_CODENAME= 00:06:06.280 ++ PLATFORM_ID=platform:f39 00:06:06.280 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:06.280 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:06.280 ++ LOGO=fedora-logo-icon 00:06:06.280 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:06.280 ++ HOME_URL=https://fedoraproject.org/ 00:06:06.280 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:06.280 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:06.280 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:06.280 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:06.280 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:06.280 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:06.280 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:06.280 ++ SUPPORT_END=2024-11-12 00:06:06.280 ++ VARIANT='Cloud Edition' 00:06:06.280 ++ VARIANT_ID=cloud 00:06:06.280 + uname -a 00:06:06.280 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:06.280 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:06.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.539 Hugepages 00:06:06.539 node hugesize free / total 00:06:06.539 node0 1048576kB 0 / 0 00:06:06.539 node0 2048kB 0 / 0 00:06:06.539 00:06:06.539 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:06.539 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:06.798 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:06.798 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:06.798 + rm -f /tmp/spdk-ld-path 00:06:06.798 + source autorun-spdk.conf 00:06:06.798 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:06.798 ++ SPDK_TEST_NVMF=1 00:06:06.798 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:06.798 ++ SPDK_TEST_URING=1 00:06:06.798 ++ SPDK_TEST_USDT=1 00:06:06.798 ++ SPDK_RUN_UBSAN=1 00:06:06.798 ++ NET_TYPE=virt 00:06:06.798 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:06.798 ++ RUN_NIGHTLY=0 00:06:06.798 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:06.798 + [[ -n '' ]] 00:06:06.798 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:06.798 + for M in /var/spdk/build-*-manifest.txt 00:06:06.798 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:06.798 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:06.798 + for M in /var/spdk/build-*-manifest.txt 00:06:06.798 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:06.798 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:06.798 + for M in /var/spdk/build-*-manifest.txt 00:06:06.798 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:06.798 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:06.798 ++ uname 00:06:06.798 + [[ Linux == \L\i\n\u\x ]] 00:06:06.798 + sudo dmesg -T 00:06:06.798 + sudo dmesg --clear 00:06:06.798 + dmesg_pid=5377 00:06:06.798 + sudo dmesg -Tw 00:06:06.798 + [[ Fedora Linux == FreeBSD ]] 00:06:06.798 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:06.798 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:06.798 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:06.798 + [[ -x /usr/src/fio-static/fio ]] 00:06:06.798 + export FIO_BIN=/usr/src/fio-static/fio 00:06:06.798 + FIO_BIN=/usr/src/fio-static/fio 00:06:06.798 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:06.798 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:06.798 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:06.798 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:06.798 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:06.798 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:06.798 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:06.798 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:06.798 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:06.798 06:01:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:06.798 06:01:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:06.798 06:01:11 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:06.798 06:01:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:06.798 06:01:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:07.057 06:01:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:07.057 06:01:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.057 06:01:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:07.057 06:01:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:07.057 06:01:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.057 06:01:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.057 06:01:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.057 06:01:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.057 06:01:11 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.057 06:01:11 -- paths/export.sh@5 -- $ export PATH 00:06:07.057 06:01:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.057 06:01:11 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:07.057 06:01:11 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:07.057 06:01:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732687271.XXXXXX 00:06:07.057 06:01:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732687271.ParDkx 00:06:07.057 06:01:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:07.057 06:01:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:07.057 06:01:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:07.057 06:01:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:07.057 06:01:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:07.057 06:01:11 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:07.057 06:01:11 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:07.057 06:01:11 -- common/autotest_common.sh@10 -- $ set +x 00:06:07.057 06:01:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:06:07.057 06:01:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:07.057 06:01:11 -- pm/common@17 -- $ local monitor 00:06:07.057 06:01:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:07.057 06:01:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:07.057 06:01:11 -- pm/common@25 -- $ sleep 1 00:06:07.057 06:01:11 -- pm/common@21 -- $ date +%s 00:06:07.057 06:01:11 -- pm/common@21 -- $ date +%s 00:06:07.057 06:01:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732687271 00:06:07.057 06:01:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732687271 00:06:07.057 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732687271_collect-cpu-load.pm.log 00:06:07.057 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732687271_collect-vmstat.pm.log 00:06:07.991 06:01:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:07.991 06:01:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:07.991 06:01:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:07.991 06:01:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:07.991 06:01:12 -- spdk/autobuild.sh@16 -- $ date -u 00:06:07.991 Wed Nov 27 06:01:12 AM UTC 2024 00:06:07.991 06:01:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:07.991 v25.01-pre-274-g345c51d49 00:06:07.991 06:01:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:07.991 06:01:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:07.991 06:01:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:07.991 06:01:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:07.991 06:01:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:07.991 06:01:12 -- common/autotest_common.sh@10 -- $ set +x 00:06:07.991 ************************************ 00:06:07.991 START TEST ubsan 00:06:07.991 ************************************ 00:06:07.991 using ubsan 00:06:07.991 06:01:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:07.991 00:06:07.991 real 0m0.000s 00:06:07.991 user 0m0.000s 00:06:07.991 sys 0m0.000s 00:06:07.991 06:01:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:07.991 06:01:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:07.991 ************************************ 00:06:07.991 END TEST ubsan 00:06:07.991 ************************************ 00:06:07.991 06:01:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:07.991 06:01:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:07.991 06:01:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:07.991 06:01:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:07.991 06:01:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:07.991 06:01:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:07.991 06:01:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:07.991 06:01:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:07.991 06:01:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:06:08.249 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.249 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:08.507 Using 'verbs' RDMA provider 00:06:21.727 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:36.643 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:36.643 Creating mk/config.mk...done. 00:06:36.643 Creating mk/cc.flags.mk...done. 00:06:36.643 Type 'make' to build. 
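(Editor's sketch: the configure step above can be reproduced by hand inside the VM with the same flags the log prints; the -j10 used in the make step that follows matches the 10 vCPUs given to the VM. Paths assume the spdk_repo layout from this run.)

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10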
00:06:36.643 06:01:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:36.643 06:01:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:36.643 06:01:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:36.643 06:01:39 -- common/autotest_common.sh@10 -- $ set +x 00:06:36.643 ************************************ 00:06:36.643 START TEST make 00:06:36.643 ************************************ 00:06:36.643 06:01:39 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:36.643 make[1]: Nothing to be done for 'all'. 00:06:48.840 The Meson build system 00:06:48.840 Version: 1.5.0 00:06:48.840 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:48.840 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:48.840 Build type: native build 00:06:48.840 Program cat found: YES (/usr/bin/cat) 00:06:48.840 Project name: DPDK 00:06:48.840 Project version: 24.03.0 00:06:48.840 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:48.840 C linker for the host machine: cc ld.bfd 2.40-14 00:06:48.840 Host machine cpu family: x86_64 00:06:48.840 Host machine cpu: x86_64 00:06:48.840 Message: ## Building in Developer Mode ## 00:06:48.840 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:48.840 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:48.840 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:48.840 Program python3 found: YES (/usr/bin/python3) 00:06:48.840 Program cat found: YES (/usr/bin/cat) 00:06:48.840 Compiler for C supports arguments -march=native: YES 00:06:48.840 Checking for size of "void *" : 8 00:06:48.840 Checking for size of "void *" : 8 (cached) 00:06:48.840 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:48.840 Library m found: YES 00:06:48.840 Library numa found: YES 00:06:48.840 Has header "numaif.h" : YES 00:06:48.840 Library fdt found: NO 00:06:48.840 Library execinfo found: NO 00:06:48.840 Has header "execinfo.h" : YES 00:06:48.840 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:48.840 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:48.840 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:48.840 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:48.840 Run-time dependency openssl found: YES 3.1.1 00:06:48.840 Run-time dependency libpcap found: YES 1.10.4 00:06:48.840 Has header "pcap.h" with dependency libpcap: YES 00:06:48.840 Compiler for C supports arguments -Wcast-qual: YES 00:06:48.840 Compiler for C supports arguments -Wdeprecated: YES 00:06:48.840 Compiler for C supports arguments -Wformat: YES 00:06:48.840 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:48.840 Compiler for C supports arguments -Wformat-security: NO 00:06:48.840 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:48.840 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:48.840 Compiler for C supports arguments -Wnested-externs: YES 00:06:48.840 Compiler for C supports arguments -Wold-style-definition: YES 00:06:48.840 Compiler for C supports arguments -Wpointer-arith: YES 00:06:48.840 Compiler for C supports arguments -Wsign-compare: YES 00:06:48.840 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:48.840 Compiler for C supports arguments -Wundef: YES 00:06:48.840 Compiler for C supports arguments -Wwrite-strings: YES 00:06:48.840 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:06:48.840 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:48.840 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:48.840 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:48.840 Program objdump found: YES (/usr/bin/objdump) 00:06:48.840 Compiler for C supports arguments -mavx512f: YES 00:06:48.840 Checking if "AVX512 checking" compiles: YES 00:06:48.840 Fetching value of define "__SSE4_2__" : 1 00:06:48.840 Fetching value of define "__AES__" : 1 00:06:48.840 Fetching value of define "__AVX__" : 1 00:06:48.840 Fetching value of define "__AVX2__" : 1 00:06:48.840 Fetching value of define "__AVX512BW__" : (undefined) 00:06:48.840 Fetching value of define "__AVX512CD__" : (undefined) 00:06:48.840 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:48.840 Fetching value of define "__AVX512F__" : (undefined) 00:06:48.840 Fetching value of define "__AVX512VL__" : (undefined) 00:06:48.840 Fetching value of define "__PCLMUL__" : 1 00:06:48.840 Fetching value of define "__RDRND__" : 1 00:06:48.840 Fetching value of define "__RDSEED__" : 1 00:06:48.840 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:48.840 Fetching value of define "__znver1__" : (undefined) 00:06:48.840 Fetching value of define "__znver2__" : (undefined) 00:06:48.840 Fetching value of define "__znver3__" : (undefined) 00:06:48.840 Fetching value of define "__znver4__" : (undefined) 00:06:48.840 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:48.840 Message: lib/log: Defining dependency "log" 00:06:48.840 Message: lib/kvargs: Defining dependency "kvargs" 00:06:48.840 Message: lib/telemetry: Defining dependency "telemetry" 00:06:48.840 Checking for function "getentropy" : NO 00:06:48.840 Message: lib/eal: Defining dependency "eal" 00:06:48.840 Message: lib/ring: Defining dependency "ring" 00:06:48.840 Message: lib/rcu: Defining dependency "rcu" 00:06:48.840 Message: lib/mempool: Defining dependency "mempool" 00:06:48.840 Message: lib/mbuf: Defining dependency "mbuf" 00:06:48.840 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:48.840 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:48.840 Compiler for C supports arguments -mpclmul: YES 00:06:48.840 Compiler for C supports arguments -maes: YES 00:06:48.840 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:48.840 Compiler for C supports arguments -mavx512bw: YES 00:06:48.840 Compiler for C supports arguments -mavx512dq: YES 00:06:48.840 Compiler for C supports arguments -mavx512vl: YES 00:06:48.840 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:48.840 Compiler for C supports arguments -mavx2: YES 00:06:48.840 Compiler for C supports arguments -mavx: YES 00:06:48.840 Message: lib/net: Defining dependency "net" 00:06:48.840 Message: lib/meter: Defining dependency "meter" 00:06:48.840 Message: lib/ethdev: Defining dependency "ethdev" 00:06:48.840 Message: lib/pci: Defining dependency "pci" 00:06:48.840 Message: lib/cmdline: Defining dependency "cmdline" 00:06:48.840 Message: lib/hash: Defining dependency "hash" 00:06:48.840 Message: lib/timer: Defining dependency "timer" 00:06:48.840 Message: lib/compressdev: Defining dependency "compressdev" 00:06:48.840 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:48.840 Message: lib/dmadev: Defining dependency "dmadev" 00:06:48.840 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:48.840 Message: lib/power: Defining 
dependency "power" 00:06:48.840 Message: lib/reorder: Defining dependency "reorder" 00:06:48.840 Message: lib/security: Defining dependency "security" 00:06:48.840 Has header "linux/userfaultfd.h" : YES 00:06:48.840 Has header "linux/vduse.h" : YES 00:06:48.840 Message: lib/vhost: Defining dependency "vhost" 00:06:48.840 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:48.840 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:48.840 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:48.840 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:48.840 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:48.840 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:48.840 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:48.840 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:48.840 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:48.840 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:48.840 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:48.840 Configuring doxy-api-html.conf using configuration 00:06:48.840 Configuring doxy-api-man.conf using configuration 00:06:48.840 Program mandb found: YES (/usr/bin/mandb) 00:06:48.840 Program sphinx-build found: NO 00:06:48.840 Configuring rte_build_config.h using configuration 00:06:48.840 Message: 00:06:48.840 ================= 00:06:48.841 Applications Enabled 00:06:48.841 ================= 00:06:48.841 00:06:48.841 apps: 00:06:48.841 00:06:48.841 00:06:48.841 Message: 00:06:48.841 ================= 00:06:48.841 Libraries Enabled 00:06:48.841 ================= 00:06:48.841 00:06:48.841 libs: 00:06:48.841 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:48.841 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:48.841 cryptodev, dmadev, power, reorder, security, vhost, 00:06:48.841 00:06:48.841 Message: 00:06:48.841 =============== 00:06:48.841 Drivers Enabled 00:06:48.841 =============== 00:06:48.841 00:06:48.841 common: 00:06:48.841 00:06:48.841 bus: 00:06:48.841 pci, vdev, 00:06:48.841 mempool: 00:06:48.841 ring, 00:06:48.841 dma: 00:06:48.841 00:06:48.841 net: 00:06:48.841 00:06:48.841 crypto: 00:06:48.841 00:06:48.841 compress: 00:06:48.841 00:06:48.841 vdpa: 00:06:48.841 00:06:48.841 00:06:48.841 Message: 00:06:48.841 ================= 00:06:48.841 Content Skipped 00:06:48.841 ================= 00:06:48.841 00:06:48.841 apps: 00:06:48.841 dumpcap: explicitly disabled via build config 00:06:48.841 graph: explicitly disabled via build config 00:06:48.841 pdump: explicitly disabled via build config 00:06:48.841 proc-info: explicitly disabled via build config 00:06:48.841 test-acl: explicitly disabled via build config 00:06:48.841 test-bbdev: explicitly disabled via build config 00:06:48.841 test-cmdline: explicitly disabled via build config 00:06:48.841 test-compress-perf: explicitly disabled via build config 00:06:48.841 test-crypto-perf: explicitly disabled via build config 00:06:48.841 test-dma-perf: explicitly disabled via build config 00:06:48.841 test-eventdev: explicitly disabled via build config 00:06:48.841 test-fib: explicitly disabled via build config 00:06:48.841 test-flow-perf: explicitly disabled via build config 00:06:48.841 test-gpudev: explicitly disabled via build config 00:06:48.841 test-mldev: explicitly disabled via build config 00:06:48.841 test-pipeline: 
explicitly disabled via build config 00:06:48.841 test-pmd: explicitly disabled via build config 00:06:48.841 test-regex: explicitly disabled via build config 00:06:48.841 test-sad: explicitly disabled via build config 00:06:48.841 test-security-perf: explicitly disabled via build config 00:06:48.841 00:06:48.841 libs: 00:06:48.841 argparse: explicitly disabled via build config 00:06:48.841 metrics: explicitly disabled via build config 00:06:48.841 acl: explicitly disabled via build config 00:06:48.841 bbdev: explicitly disabled via build config 00:06:48.841 bitratestats: explicitly disabled via build config 00:06:48.841 bpf: explicitly disabled via build config 00:06:48.841 cfgfile: explicitly disabled via build config 00:06:48.841 distributor: explicitly disabled via build config 00:06:48.841 efd: explicitly disabled via build config 00:06:48.841 eventdev: explicitly disabled via build config 00:06:48.841 dispatcher: explicitly disabled via build config 00:06:48.841 gpudev: explicitly disabled via build config 00:06:48.841 gro: explicitly disabled via build config 00:06:48.841 gso: explicitly disabled via build config 00:06:48.841 ip_frag: explicitly disabled via build config 00:06:48.841 jobstats: explicitly disabled via build config 00:06:48.841 latencystats: explicitly disabled via build config 00:06:48.841 lpm: explicitly disabled via build config 00:06:48.841 member: explicitly disabled via build config 00:06:48.841 pcapng: explicitly disabled via build config 00:06:48.841 rawdev: explicitly disabled via build config 00:06:48.841 regexdev: explicitly disabled via build config 00:06:48.841 mldev: explicitly disabled via build config 00:06:48.841 rib: explicitly disabled via build config 00:06:48.841 sched: explicitly disabled via build config 00:06:48.841 stack: explicitly disabled via build config 00:06:48.841 ipsec: explicitly disabled via build config 00:06:48.841 pdcp: explicitly disabled via build config 00:06:48.841 fib: explicitly disabled via build config 00:06:48.841 port: explicitly disabled via build config 00:06:48.841 pdump: explicitly disabled via build config 00:06:48.841 table: explicitly disabled via build config 00:06:48.841 pipeline: explicitly disabled via build config 00:06:48.841 graph: explicitly disabled via build config 00:06:48.841 node: explicitly disabled via build config 00:06:48.841 00:06:48.841 drivers: 00:06:48.841 common/cpt: not in enabled drivers build config 00:06:48.841 common/dpaax: not in enabled drivers build config 00:06:48.841 common/iavf: not in enabled drivers build config 00:06:48.841 common/idpf: not in enabled drivers build config 00:06:48.841 common/ionic: not in enabled drivers build config 00:06:48.841 common/mvep: not in enabled drivers build config 00:06:48.841 common/octeontx: not in enabled drivers build config 00:06:48.841 bus/auxiliary: not in enabled drivers build config 00:06:48.841 bus/cdx: not in enabled drivers build config 00:06:48.841 bus/dpaa: not in enabled drivers build config 00:06:48.841 bus/fslmc: not in enabled drivers build config 00:06:48.841 bus/ifpga: not in enabled drivers build config 00:06:48.841 bus/platform: not in enabled drivers build config 00:06:48.841 bus/uacce: not in enabled drivers build config 00:06:48.841 bus/vmbus: not in enabled drivers build config 00:06:48.841 common/cnxk: not in enabled drivers build config 00:06:48.841 common/mlx5: not in enabled drivers build config 00:06:48.841 common/nfp: not in enabled drivers build config 00:06:48.841 common/nitrox: not in enabled drivers build config 
00:06:48.841 common/qat: not in enabled drivers build config 00:06:48.841 common/sfc_efx: not in enabled drivers build config 00:06:48.841 mempool/bucket: not in enabled drivers build config 00:06:48.841 mempool/cnxk: not in enabled drivers build config 00:06:48.841 mempool/dpaa: not in enabled drivers build config 00:06:48.841 mempool/dpaa2: not in enabled drivers build config 00:06:48.841 mempool/octeontx: not in enabled drivers build config 00:06:48.841 mempool/stack: not in enabled drivers build config 00:06:48.841 dma/cnxk: not in enabled drivers build config 00:06:48.841 dma/dpaa: not in enabled drivers build config 00:06:48.841 dma/dpaa2: not in enabled drivers build config 00:06:48.841 dma/hisilicon: not in enabled drivers build config 00:06:48.841 dma/idxd: not in enabled drivers build config 00:06:48.841 dma/ioat: not in enabled drivers build config 00:06:48.841 dma/skeleton: not in enabled drivers build config 00:06:48.841 net/af_packet: not in enabled drivers build config 00:06:48.841 net/af_xdp: not in enabled drivers build config 00:06:48.841 net/ark: not in enabled drivers build config 00:06:48.841 net/atlantic: not in enabled drivers build config 00:06:48.841 net/avp: not in enabled drivers build config 00:06:48.841 net/axgbe: not in enabled drivers build config 00:06:48.841 net/bnx2x: not in enabled drivers build config 00:06:48.841 net/bnxt: not in enabled drivers build config 00:06:48.841 net/bonding: not in enabled drivers build config 00:06:48.841 net/cnxk: not in enabled drivers build config 00:06:48.841 net/cpfl: not in enabled drivers build config 00:06:48.841 net/cxgbe: not in enabled drivers build config 00:06:48.841 net/dpaa: not in enabled drivers build config 00:06:48.841 net/dpaa2: not in enabled drivers build config 00:06:48.841 net/e1000: not in enabled drivers build config 00:06:48.841 net/ena: not in enabled drivers build config 00:06:48.841 net/enetc: not in enabled drivers build config 00:06:48.841 net/enetfec: not in enabled drivers build config 00:06:48.841 net/enic: not in enabled drivers build config 00:06:48.841 net/failsafe: not in enabled drivers build config 00:06:48.841 net/fm10k: not in enabled drivers build config 00:06:48.841 net/gve: not in enabled drivers build config 00:06:48.841 net/hinic: not in enabled drivers build config 00:06:48.841 net/hns3: not in enabled drivers build config 00:06:48.841 net/i40e: not in enabled drivers build config 00:06:48.841 net/iavf: not in enabled drivers build config 00:06:48.841 net/ice: not in enabled drivers build config 00:06:48.841 net/idpf: not in enabled drivers build config 00:06:48.841 net/igc: not in enabled drivers build config 00:06:48.841 net/ionic: not in enabled drivers build config 00:06:48.841 net/ipn3ke: not in enabled drivers build config 00:06:48.841 net/ixgbe: not in enabled drivers build config 00:06:48.841 net/mana: not in enabled drivers build config 00:06:48.841 net/memif: not in enabled drivers build config 00:06:48.841 net/mlx4: not in enabled drivers build config 00:06:48.841 net/mlx5: not in enabled drivers build config 00:06:48.841 net/mvneta: not in enabled drivers build config 00:06:48.841 net/mvpp2: not in enabled drivers build config 00:06:48.841 net/netvsc: not in enabled drivers build config 00:06:48.841 net/nfb: not in enabled drivers build config 00:06:48.841 net/nfp: not in enabled drivers build config 00:06:48.841 net/ngbe: not in enabled drivers build config 00:06:48.841 net/null: not in enabled drivers build config 00:06:48.841 net/octeontx: not in enabled drivers 
build config 00:06:48.841 net/octeon_ep: not in enabled drivers build config 00:06:48.841 net/pcap: not in enabled drivers build config 00:06:48.841 net/pfe: not in enabled drivers build config 00:06:48.841 net/qede: not in enabled drivers build config 00:06:48.841 net/ring: not in enabled drivers build config 00:06:48.841 net/sfc: not in enabled drivers build config 00:06:48.841 net/softnic: not in enabled drivers build config 00:06:48.841 net/tap: not in enabled drivers build config 00:06:48.841 net/thunderx: not in enabled drivers build config 00:06:48.841 net/txgbe: not in enabled drivers build config 00:06:48.841 net/vdev_netvsc: not in enabled drivers build config 00:06:48.841 net/vhost: not in enabled drivers build config 00:06:48.841 net/virtio: not in enabled drivers build config 00:06:48.841 net/vmxnet3: not in enabled drivers build config 00:06:48.841 raw/*: missing internal dependency, "rawdev" 00:06:48.841 crypto/armv8: not in enabled drivers build config 00:06:48.841 crypto/bcmfs: not in enabled drivers build config 00:06:48.841 crypto/caam_jr: not in enabled drivers build config 00:06:48.841 crypto/ccp: not in enabled drivers build config 00:06:48.841 crypto/cnxk: not in enabled drivers build config 00:06:48.841 crypto/dpaa_sec: not in enabled drivers build config 00:06:48.842 crypto/dpaa2_sec: not in enabled drivers build config 00:06:48.842 crypto/ipsec_mb: not in enabled drivers build config 00:06:48.842 crypto/mlx5: not in enabled drivers build config 00:06:48.842 crypto/mvsam: not in enabled drivers build config 00:06:48.842 crypto/nitrox: not in enabled drivers build config 00:06:48.842 crypto/null: not in enabled drivers build config 00:06:48.842 crypto/octeontx: not in enabled drivers build config 00:06:48.842 crypto/openssl: not in enabled drivers build config 00:06:48.842 crypto/scheduler: not in enabled drivers build config 00:06:48.842 crypto/uadk: not in enabled drivers build config 00:06:48.842 crypto/virtio: not in enabled drivers build config 00:06:48.842 compress/isal: not in enabled drivers build config 00:06:48.842 compress/mlx5: not in enabled drivers build config 00:06:48.842 compress/nitrox: not in enabled drivers build config 00:06:48.842 compress/octeontx: not in enabled drivers build config 00:06:48.842 compress/zlib: not in enabled drivers build config 00:06:48.842 regex/*: missing internal dependency, "regexdev" 00:06:48.842 ml/*: missing internal dependency, "mldev" 00:06:48.842 vdpa/ifc: not in enabled drivers build config 00:06:48.842 vdpa/mlx5: not in enabled drivers build config 00:06:48.842 vdpa/nfp: not in enabled drivers build config 00:06:48.842 vdpa/sfc: not in enabled drivers build config 00:06:48.842 event/*: missing internal dependency, "eventdev" 00:06:48.842 baseband/*: missing internal dependency, "bbdev" 00:06:48.842 gpu/*: missing internal dependency, "gpudev" 00:06:48.842 00:06:48.842 00:06:48.842 Build targets in project: 85 00:06:48.842 00:06:48.842 DPDK 24.03.0 00:06:48.842 00:06:48.842 User defined options 00:06:48.842 buildtype : debug 00:06:48.842 default_library : shared 00:06:48.842 libdir : lib 00:06:48.842 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:48.842 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:48.842 c_link_args : 00:06:48.842 cpu_instruction_set: native 00:06:48.842 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:48.842 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:48.842 enable_docs : false 00:06:48.842 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:48.842 enable_kmods : false 00:06:48.842 max_lcores : 128 00:06:48.842 tests : false 00:06:48.842 00:06:48.842 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:48.842 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:48.842 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:48.842 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:48.842 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:48.842 [4/268] Linking static target lib/librte_kvargs.a 00:06:48.842 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:48.842 [6/268] Linking static target lib/librte_log.a 00:06:48.842 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:49.101 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:49.101 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:49.101 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:49.101 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:49.101 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:49.359 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:49.359 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:49.359 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:49.359 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:49.359 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:49.616 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:49.616 [19/268] Linking static target lib/librte_telemetry.a 00:06:49.616 [20/268] Linking target lib/librte_log.so.24.1 00:06:49.874 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:49.874 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:49.874 [23/268] Linking target lib/librte_kvargs.so.24.1 00:06:50.132 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:50.132 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:50.132 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:50.132 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:50.132 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:50.132 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:50.132 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:50.390 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:50.390 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:50.390 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.390 [34/268] Linking target lib/librte_telemetry.so.24.1 00:06:50.647 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:50.906 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:50.906 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:50.906 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:50.906 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:50.906 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:50.906 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:51.164 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:51.164 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:51.164 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:51.164 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:51.423 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:51.423 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:51.423 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:51.680 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:51.680 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:51.938 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:51.938 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:52.196 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:52.196 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:52.196 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:52.196 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:52.476 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:52.476 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:52.476 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:52.476 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:52.476 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:52.734 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:52.734 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:52.992 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:52.992 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:53.250 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:53.250 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:53.250 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:53.509 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:53.509 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:53.509 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:53.767 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:53.767 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:53.767 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:53.767 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:53.767 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:53.767 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:54.025 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:54.025 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:54.025 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:54.025 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:54.281 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:54.281 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:54.281 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:54.539 [85/268] Linking static target lib/librte_eal.a 00:06:54.539 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:54.539 [87/268] Linking static target lib/librte_ring.a 00:06:54.539 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:54.797 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:54.797 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:54.797 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:54.797 [92/268] Linking static target lib/librte_rcu.a 00:06:55.055 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:55.055 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.055 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:55.055 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:55.055 [97/268] Linking static target lib/librte_mempool.a 00:06:55.333 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:55.333 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:55.333 [100/268] Linking static target lib/librte_mbuf.a 00:06:55.333 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:55.333 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.333 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:55.591 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:55.591 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:55.847 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:55.847 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:55.847 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:56.105 [109/268] Linking static target lib/librte_meter.a 00:06:56.105 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:56.105 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:56.105 
[112/268] Linking static target lib/librte_net.a 00:06:56.361 [113/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.361 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:56.361 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:56.361 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.361 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.618 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:56.618 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.874 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:57.131 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:57.131 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:57.388 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:57.645 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:57.645 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:57.645 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:57.645 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:57.645 [128/268] Linking static target lib/librte_pci.a 00:06:57.645 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:57.902 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:57.902 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:57.902 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:57.902 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:57.902 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:57.902 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:57.902 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:57.902 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:57.902 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:58.159 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:58.159 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.159 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:58.159 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:58.159 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:58.159 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:58.159 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:58.417 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:58.417 [147/268] Linking static target lib/librte_ethdev.a 00:06:58.417 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:58.676 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:58.676 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:58.934 [151/268] Linking static target 
lib/librte_timer.a 00:06:58.934 [152/268] Linking static target lib/librte_cmdline.a 00:06:58.934 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:58.934 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:58.934 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:59.192 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:59.192 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:59.192 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:59.192 [159/268] Linking static target lib/librte_hash.a 00:06:59.450 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:59.708 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:59.708 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:59.708 [163/268] Linking static target lib/librte_compressdev.a 00:06:59.708 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:59.708 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:59.708 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:59.966 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:00.225 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:00.225 [169/268] Linking static target lib/librte_dmadev.a 00:07:00.225 [170/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.225 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:00.483 [172/268] Linking static target lib/librte_cryptodev.a 00:07:00.483 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:00.483 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:00.483 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:00.483 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.483 [177/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:00.741 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.999 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:00.999 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:00.999 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:00.999 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:00.999 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:01.258 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:01.258 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:01.258 [186/268] Linking static target lib/librte_power.a 00:07:01.516 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:01.516 [188/268] Linking static target lib/librte_reorder.a 00:07:01.774 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:01.774 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:01.774 [191/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:07:01.774 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:01.774 [193/268] Linking static target lib/librte_security.a 00:07:02.341 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.341 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:02.598 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.856 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.856 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:02.856 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:02.856 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:03.114 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.114 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:03.405 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:03.405 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:03.685 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:03.685 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:03.685 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:03.685 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:03.685 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:03.685 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:03.943 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:03.943 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:03.943 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:03.943 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:03.943 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:03.943 [216/268] Linking static target drivers/librte_bus_vdev.a 00:07:04.201 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:04.201 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:04.201 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:04.201 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:04.201 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:04.201 [222/268] Linking static target drivers/librte_bus_pci.a 00:07:04.459 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.459 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:04.459 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:04.459 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:04.459 [227/268] Linking static target drivers/librte_mempool_ring.a 00:07:05.024 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:07:05.591 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:05.591 [230/268] Linking static target lib/librte_vhost.a 00:07:06.158 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.417 [232/268] Linking target lib/librte_eal.so.24.1 00:07:06.417 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:06.675 [234/268] Linking target lib/librte_timer.so.24.1 00:07:06.675 [235/268] Linking target lib/librte_meter.so.24.1 00:07:06.675 [236/268] Linking target lib/librte_pci.so.24.1 00:07:06.675 [237/268] Linking target lib/librte_ring.so.24.1 00:07:06.675 [238/268] Linking target lib/librte_dmadev.so.24.1 00:07:06.675 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:06.675 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:06.675 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:06.675 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:06.675 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:06.675 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:06.933 [245/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.933 [246/268] Linking target lib/librte_rcu.so.24.1 00:07:06.933 [247/268] Linking target lib/librte_mempool.so.24.1 00:07:06.933 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:06.933 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:06.933 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:06.933 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:06.933 [252/268] Linking target lib/librte_mbuf.so.24.1 00:07:07.191 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.191 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:07.191 [255/268] Linking target lib/librte_reorder.so.24.1 00:07:07.191 [256/268] Linking target lib/librte_net.so.24.1 00:07:07.191 [257/268] Linking target lib/librte_compressdev.so.24.1 00:07:07.191 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:07.449 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:07.449 [260/268] Linking target lib/librte_hash.so.24.1 00:07:07.449 [261/268] Linking target lib/librte_cmdline.so.24.1 00:07:07.449 [262/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:07.449 [263/268] Linking target lib/librte_ethdev.so.24.1 00:07:07.449 [264/268] Linking target lib/librte_security.so.24.1 00:07:07.707 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:07.707 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:07.707 [267/268] Linking target lib/librte_power.so.24.1 00:07:07.707 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:07.707 INFO: autodetecting backend as ninja 00:07:07.707 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:39.781 CC lib/ut/ut.o 00:07:39.781 CC lib/log/log.o 00:07:39.781 CC lib/log/log_deprecated.o 00:07:39.781 CC 
lib/log/log_flags.o 00:07:39.781 CC lib/ut_mock/mock.o 00:07:39.781 LIB libspdk_ut.a 00:07:39.781 LIB libspdk_log.a 00:07:39.781 LIB libspdk_ut_mock.a 00:07:39.781 SO libspdk_ut.so.2.0 00:07:39.781 SO libspdk_log.so.7.1 00:07:39.781 SO libspdk_ut_mock.so.6.0 00:07:39.781 SYMLINK libspdk_ut.so 00:07:39.781 SYMLINK libspdk_log.so 00:07:39.781 SYMLINK libspdk_ut_mock.so 00:07:39.781 CC lib/ioat/ioat.o 00:07:39.781 CXX lib/trace_parser/trace.o 00:07:39.781 CC lib/dma/dma.o 00:07:39.781 CC lib/util/base64.o 00:07:39.781 CC lib/util/bit_array.o 00:07:39.781 CC lib/util/crc16.o 00:07:39.781 CC lib/util/cpuset.o 00:07:39.781 CC lib/util/crc32.o 00:07:39.781 CC lib/util/crc32c.o 00:07:39.781 CC lib/vfio_user/host/vfio_user_pci.o 00:07:39.781 CC lib/util/crc32_ieee.o 00:07:39.781 CC lib/util/crc64.o 00:07:39.781 CC lib/vfio_user/host/vfio_user.o 00:07:39.781 CC lib/util/dif.o 00:07:39.781 CC lib/util/fd.o 00:07:39.781 LIB libspdk_dma.a 00:07:39.781 CC lib/util/fd_group.o 00:07:39.781 SO libspdk_dma.so.5.0 00:07:39.781 CC lib/util/file.o 00:07:39.781 CC lib/util/hexlify.o 00:07:39.781 LIB libspdk_ioat.a 00:07:39.781 SO libspdk_ioat.so.7.0 00:07:39.781 SYMLINK libspdk_dma.so 00:07:39.781 CC lib/util/iov.o 00:07:39.781 CC lib/util/math.o 00:07:39.781 CC lib/util/net.o 00:07:39.781 SYMLINK libspdk_ioat.so 00:07:39.781 CC lib/util/pipe.o 00:07:39.781 LIB libspdk_vfio_user.a 00:07:39.781 SO libspdk_vfio_user.so.5.0 00:07:39.781 CC lib/util/strerror_tls.o 00:07:39.781 CC lib/util/string.o 00:07:39.781 SYMLINK libspdk_vfio_user.so 00:07:39.781 CC lib/util/uuid.o 00:07:39.781 CC lib/util/xor.o 00:07:39.781 CC lib/util/zipf.o 00:07:39.781 CC lib/util/md5.o 00:07:39.781 LIB libspdk_util.a 00:07:39.781 SO libspdk_util.so.10.1 00:07:39.781 LIB libspdk_trace_parser.a 00:07:39.781 SYMLINK libspdk_util.so 00:07:39.781 SO libspdk_trace_parser.so.6.0 00:07:39.781 SYMLINK libspdk_trace_parser.so 00:07:39.781 CC lib/vmd/vmd.o 00:07:39.781 CC lib/vmd/led.o 00:07:39.781 CC lib/json/json_parse.o 00:07:39.781 CC lib/idxd/idxd.o 00:07:39.781 CC lib/rdma_utils/rdma_utils.o 00:07:39.781 CC lib/idxd/idxd_kernel.o 00:07:39.781 CC lib/json/json_util.o 00:07:39.781 CC lib/idxd/idxd_user.o 00:07:39.781 CC lib/env_dpdk/env.o 00:07:39.781 CC lib/conf/conf.o 00:07:39.781 CC lib/env_dpdk/memory.o 00:07:39.781 CC lib/env_dpdk/pci.o 00:07:39.781 LIB libspdk_conf.a 00:07:39.781 CC lib/json/json_write.o 00:07:39.781 CC lib/env_dpdk/init.o 00:07:39.781 CC lib/env_dpdk/threads.o 00:07:39.781 SO libspdk_conf.so.6.0 00:07:39.781 LIB libspdk_rdma_utils.a 00:07:39.781 SO libspdk_rdma_utils.so.1.0 00:07:39.781 SYMLINK libspdk_conf.so 00:07:39.781 CC lib/env_dpdk/pci_ioat.o 00:07:39.781 SYMLINK libspdk_rdma_utils.so 00:07:39.781 CC lib/env_dpdk/pci_virtio.o 00:07:39.781 CC lib/env_dpdk/pci_vmd.o 00:07:39.781 LIB libspdk_json.a 00:07:39.781 CC lib/env_dpdk/pci_idxd.o 00:07:39.781 CC lib/rdma_provider/common.o 00:07:39.781 SO libspdk_json.so.6.0 00:07:39.781 LIB libspdk_idxd.a 00:07:39.781 SO libspdk_idxd.so.12.1 00:07:39.781 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:39.781 CC lib/env_dpdk/pci_event.o 00:07:39.781 CC lib/env_dpdk/sigbus_handler.o 00:07:39.781 LIB libspdk_vmd.a 00:07:39.781 SYMLINK libspdk_json.so 00:07:39.781 CC lib/env_dpdk/pci_dpdk.o 00:07:39.781 SO libspdk_vmd.so.6.0 00:07:39.781 SYMLINK libspdk_idxd.so 00:07:39.782 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:39.782 SYMLINK libspdk_vmd.so 00:07:39.782 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:39.782 LIB libspdk_rdma_provider.a 00:07:39.782 CC lib/jsonrpc/jsonrpc_server_tcp.o 
00:07:39.782 CC lib/jsonrpc/jsonrpc_server.o 00:07:39.782 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:39.782 CC lib/jsonrpc/jsonrpc_client.o 00:07:39.782 SO libspdk_rdma_provider.so.7.0 00:07:39.782 SYMLINK libspdk_rdma_provider.so 00:07:39.782 LIB libspdk_jsonrpc.a 00:07:39.782 SO libspdk_jsonrpc.so.6.0 00:07:39.782 SYMLINK libspdk_jsonrpc.so 00:07:39.782 LIB libspdk_env_dpdk.a 00:07:39.782 CC lib/rpc/rpc.o 00:07:39.782 SO libspdk_env_dpdk.so.15.1 00:07:39.782 SYMLINK libspdk_env_dpdk.so 00:07:39.782 LIB libspdk_rpc.a 00:07:39.782 SO libspdk_rpc.so.6.0 00:07:39.782 SYMLINK libspdk_rpc.so 00:07:39.782 CC lib/keyring/keyring.o 00:07:39.782 CC lib/keyring/keyring_rpc.o 00:07:39.782 CC lib/notify/notify.o 00:07:39.782 CC lib/notify/notify_rpc.o 00:07:39.782 CC lib/trace/trace.o 00:07:39.782 CC lib/trace/trace_flags.o 00:07:39.782 CC lib/trace/trace_rpc.o 00:07:39.782 LIB libspdk_notify.a 00:07:39.782 LIB libspdk_keyring.a 00:07:39.782 SO libspdk_notify.so.6.0 00:07:39.782 SO libspdk_keyring.so.2.0 00:07:39.782 SYMLINK libspdk_notify.so 00:07:39.782 LIB libspdk_trace.a 00:07:39.782 SYMLINK libspdk_keyring.so 00:07:39.782 SO libspdk_trace.so.11.0 00:07:39.782 SYMLINK libspdk_trace.so 00:07:39.782 CC lib/thread/thread.o 00:07:39.782 CC lib/thread/iobuf.o 00:07:39.782 CC lib/sock/sock.o 00:07:39.782 CC lib/sock/sock_rpc.o 00:07:39.782 LIB libspdk_sock.a 00:07:39.782 SO libspdk_sock.so.10.0 00:07:40.040 SYMLINK libspdk_sock.so 00:07:40.299 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:40.299 CC lib/nvme/nvme_ctrlr.o 00:07:40.299 CC lib/nvme/nvme_fabric.o 00:07:40.299 CC lib/nvme/nvme_ns.o 00:07:40.299 CC lib/nvme/nvme_ns_cmd.o 00:07:40.299 CC lib/nvme/nvme_pcie.o 00:07:40.299 CC lib/nvme/nvme_qpair.o 00:07:40.299 CC lib/nvme/nvme_pcie_common.o 00:07:40.299 CC lib/nvme/nvme.o 00:07:40.866 LIB libspdk_thread.a 00:07:40.866 SO libspdk_thread.so.11.0 00:07:40.866 SYMLINK libspdk_thread.so 00:07:40.866 CC lib/nvme/nvme_quirks.o 00:07:40.866 CC lib/nvme/nvme_transport.o 00:07:41.124 CC lib/nvme/nvme_discovery.o 00:07:41.124 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:41.124 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:41.124 CC lib/nvme/nvme_tcp.o 00:07:41.124 CC lib/nvme/nvme_opal.o 00:07:41.382 CC lib/nvme/nvme_io_msg.o 00:07:41.382 CC lib/nvme/nvme_poll_group.o 00:07:41.641 CC lib/nvme/nvme_zns.o 00:07:41.641 CC lib/nvme/nvme_stubs.o 00:07:41.641 CC lib/nvme/nvme_auth.o 00:07:41.641 CC lib/nvme/nvme_cuse.o 00:07:41.898 CC lib/nvme/nvme_rdma.o 00:07:41.898 CC lib/accel/accel.o 00:07:41.898 CC lib/accel/accel_rpc.o 00:07:42.156 CC lib/blob/blobstore.o 00:07:42.156 CC lib/accel/accel_sw.o 00:07:42.414 CC lib/init/json_config.o 00:07:42.414 CC lib/virtio/virtio.o 00:07:42.672 CC lib/virtio/virtio_vhost_user.o 00:07:42.672 CC lib/init/subsystem.o 00:07:42.672 CC lib/virtio/virtio_vfio_user.o 00:07:42.672 CC lib/init/subsystem_rpc.o 00:07:42.930 CC lib/virtio/virtio_pci.o 00:07:42.930 CC lib/init/rpc.o 00:07:42.930 CC lib/blob/request.o 00:07:42.930 CC lib/blob/zeroes.o 00:07:42.930 CC lib/fsdev/fsdev.o 00:07:42.930 CC lib/blob/blob_bs_dev.o 00:07:43.188 LIB libspdk_init.a 00:07:43.188 SO libspdk_init.so.6.0 00:07:43.188 LIB libspdk_accel.a 00:07:43.188 LIB libspdk_virtio.a 00:07:43.188 CC lib/fsdev/fsdev_io.o 00:07:43.188 SYMLINK libspdk_init.so 00:07:43.188 SO libspdk_accel.so.16.0 00:07:43.188 SO libspdk_virtio.so.7.0 00:07:43.188 CC lib/fsdev/fsdev_rpc.o 00:07:43.188 LIB libspdk_nvme.a 00:07:43.188 SYMLINK libspdk_accel.so 00:07:43.188 SYMLINK libspdk_virtio.so 00:07:43.446 CC lib/event/app.o 00:07:43.446 CC 
lib/event/reactor.o 00:07:43.446 CC lib/event/log_rpc.o 00:07:43.446 CC lib/event/app_rpc.o 00:07:43.446 CC lib/event/scheduler_static.o 00:07:43.446 SO libspdk_nvme.so.15.0 00:07:43.446 CC lib/bdev/bdev.o 00:07:43.704 CC lib/bdev/bdev_rpc.o 00:07:43.704 CC lib/bdev/bdev_zone.o 00:07:43.704 CC lib/bdev/part.o 00:07:43.704 LIB libspdk_fsdev.a 00:07:43.704 CC lib/bdev/scsi_nvme.o 00:07:43.704 SO libspdk_fsdev.so.2.0 00:07:43.704 SYMLINK libspdk_nvme.so 00:07:43.704 SYMLINK libspdk_fsdev.so 00:07:43.962 LIB libspdk_event.a 00:07:43.962 SO libspdk_event.so.14.0 00:07:43.962 SYMLINK libspdk_event.so 00:07:43.962 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:44.896 LIB libspdk_fuse_dispatcher.a 00:07:44.896 SO libspdk_fuse_dispatcher.so.1.0 00:07:44.896 SYMLINK libspdk_fuse_dispatcher.so 00:07:46.269 LIB libspdk_blob.a 00:07:46.269 SO libspdk_blob.so.12.0 00:07:46.269 SYMLINK libspdk_blob.so 00:07:46.527 LIB libspdk_bdev.a 00:07:46.527 SO libspdk_bdev.so.17.0 00:07:46.527 CC lib/blobfs/tree.o 00:07:46.527 CC lib/blobfs/blobfs.o 00:07:46.527 CC lib/lvol/lvol.o 00:07:46.527 SYMLINK libspdk_bdev.so 00:07:46.785 CC lib/ftl/ftl_core.o 00:07:46.785 CC lib/ftl/ftl_init.o 00:07:46.785 CC lib/ftl/ftl_layout.o 00:07:46.785 CC lib/scsi/dev.o 00:07:46.785 CC lib/nvmf/ctrlr.o 00:07:46.785 CC lib/scsi/lun.o 00:07:46.785 CC lib/ublk/ublk.o 00:07:46.785 CC lib/nbd/nbd.o 00:07:47.042 CC lib/nbd/nbd_rpc.o 00:07:47.042 CC lib/ublk/ublk_rpc.o 00:07:47.301 CC lib/scsi/port.o 00:07:47.301 CC lib/ftl/ftl_debug.o 00:07:47.301 CC lib/nvmf/ctrlr_discovery.o 00:07:47.301 LIB libspdk_nbd.a 00:07:47.301 CC lib/ftl/ftl_io.o 00:07:47.301 CC lib/scsi/scsi.o 00:07:47.301 SO libspdk_nbd.so.7.0 00:07:47.301 CC lib/ftl/ftl_sb.o 00:07:47.301 SYMLINK libspdk_nbd.so 00:07:47.301 CC lib/scsi/scsi_bdev.o 00:07:47.559 LIB libspdk_blobfs.a 00:07:47.559 SO libspdk_blobfs.so.11.0 00:07:47.559 CC lib/nvmf/ctrlr_bdev.o 00:07:47.559 SYMLINK libspdk_blobfs.so 00:07:47.559 CC lib/ftl/ftl_l2p.o 00:07:47.559 CC lib/scsi/scsi_pr.o 00:07:47.559 CC lib/scsi/scsi_rpc.o 00:07:47.559 LIB libspdk_ublk.a 00:07:47.559 SO libspdk_ublk.so.3.0 00:07:47.559 CC lib/nvmf/subsystem.o 00:07:47.817 SYMLINK libspdk_ublk.so 00:07:47.817 CC lib/nvmf/nvmf.o 00:07:47.817 CC lib/scsi/task.o 00:07:47.817 CC lib/ftl/ftl_l2p_flat.o 00:07:47.817 LIB libspdk_lvol.a 00:07:47.817 SO libspdk_lvol.so.11.0 00:07:47.817 CC lib/ftl/ftl_nv_cache.o 00:07:47.817 SYMLINK libspdk_lvol.so 00:07:47.817 CC lib/ftl/ftl_band.o 00:07:47.817 CC lib/nvmf/nvmf_rpc.o 00:07:48.076 CC lib/ftl/ftl_band_ops.o 00:07:48.076 LIB libspdk_scsi.a 00:07:48.076 CC lib/ftl/ftl_writer.o 00:07:48.076 SO libspdk_scsi.so.9.0 00:07:48.076 SYMLINK libspdk_scsi.so 00:07:48.334 CC lib/nvmf/transport.o 00:07:48.334 CC lib/nvmf/tcp.o 00:07:48.334 CC lib/nvmf/stubs.o 00:07:48.334 CC lib/ftl/ftl_rq.o 00:07:48.334 CC lib/ftl/ftl_reloc.o 00:07:48.592 CC lib/nvmf/mdns_server.o 00:07:48.592 CC lib/ftl/ftl_l2p_cache.o 00:07:48.850 CC lib/nvmf/rdma.o 00:07:48.850 CC lib/nvmf/auth.o 00:07:48.850 CC lib/ftl/ftl_p2l.o 00:07:48.850 CC lib/ftl/ftl_p2l_log.o 00:07:48.850 CC lib/ftl/mngt/ftl_mngt.o 00:07:49.108 CC lib/iscsi/conn.o 00:07:49.108 CC lib/iscsi/init_grp.o 00:07:49.108 CC lib/iscsi/iscsi.o 00:07:49.108 CC lib/iscsi/param.o 00:07:49.108 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:49.367 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:49.367 CC lib/iscsi/portal_grp.o 00:07:49.367 CC lib/vhost/vhost.o 00:07:49.367 CC lib/iscsi/tgt_node.o 00:07:49.624 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:49.624 CC lib/ftl/mngt/ftl_mngt_md.o 
00:07:49.624 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:49.624 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:49.624 CC lib/vhost/vhost_rpc.o 00:07:49.624 CC lib/iscsi/iscsi_subsystem.o 00:07:49.882 CC lib/iscsi/iscsi_rpc.o 00:07:49.882 CC lib/iscsi/task.o 00:07:49.882 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:49.882 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:49.882 CC lib/vhost/vhost_scsi.o 00:07:50.139 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:50.139 CC lib/vhost/vhost_blk.o 00:07:50.139 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:50.139 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:50.139 CC lib/vhost/rte_vhost_user.o 00:07:50.397 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:50.397 CC lib/ftl/utils/ftl_conf.o 00:07:50.397 CC lib/ftl/utils/ftl_md.o 00:07:50.397 CC lib/ftl/utils/ftl_mempool.o 00:07:50.655 CC lib/ftl/utils/ftl_bitmap.o 00:07:50.655 LIB libspdk_iscsi.a 00:07:50.655 CC lib/ftl/utils/ftl_property.o 00:07:50.655 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:50.655 SO libspdk_iscsi.so.8.0 00:07:50.655 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:50.913 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:50.913 LIB libspdk_nvmf.a 00:07:50.913 SYMLINK libspdk_iscsi.so 00:07:50.913 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:50.913 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:50.913 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:50.913 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:50.913 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:50.913 SO libspdk_nvmf.so.20.0 00:07:51.170 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:51.170 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:51.170 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:51.170 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:51.170 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:51.170 CC lib/ftl/base/ftl_base_dev.o 00:07:51.170 CC lib/ftl/base/ftl_base_bdev.o 00:07:51.170 CC lib/ftl/ftl_trace.o 00:07:51.170 SYMLINK libspdk_nvmf.so 00:07:51.428 LIB libspdk_vhost.a 00:07:51.428 SO libspdk_vhost.so.8.0 00:07:51.428 LIB libspdk_ftl.a 00:07:51.685 SYMLINK libspdk_vhost.so 00:07:51.685 SO libspdk_ftl.so.9.0 00:07:51.943 SYMLINK libspdk_ftl.so 00:07:52.512 CC module/env_dpdk/env_dpdk_rpc.o 00:07:52.512 CC module/accel/dsa/accel_dsa.o 00:07:52.512 CC module/accel/error/accel_error.o 00:07:52.512 CC module/keyring/linux/keyring.o 00:07:52.512 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:52.512 CC module/sock/posix/posix.o 00:07:52.512 CC module/keyring/file/keyring.o 00:07:52.512 CC module/accel/ioat/accel_ioat.o 00:07:52.512 CC module/fsdev/aio/fsdev_aio.o 00:07:52.512 CC module/blob/bdev/blob_bdev.o 00:07:52.512 LIB libspdk_env_dpdk_rpc.a 00:07:52.512 SO libspdk_env_dpdk_rpc.so.6.0 00:07:52.512 SYMLINK libspdk_env_dpdk_rpc.so 00:07:52.512 CC module/keyring/linux/keyring_rpc.o 00:07:52.770 CC module/accel/ioat/accel_ioat_rpc.o 00:07:52.770 LIB libspdk_scheduler_dynamic.a 00:07:52.770 CC module/keyring/file/keyring_rpc.o 00:07:52.770 CC module/accel/error/accel_error_rpc.o 00:07:52.770 SO libspdk_scheduler_dynamic.so.4.0 00:07:52.770 LIB libspdk_keyring_linux.a 00:07:52.770 SYMLINK libspdk_scheduler_dynamic.so 00:07:52.770 LIB libspdk_blob_bdev.a 00:07:52.770 CC module/accel/dsa/accel_dsa_rpc.o 00:07:52.770 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:52.770 SO libspdk_keyring_linux.so.1.0 00:07:52.770 SO libspdk_blob_bdev.so.12.0 00:07:52.770 LIB libspdk_accel_ioat.a 00:07:52.770 LIB libspdk_keyring_file.a 00:07:52.770 LIB libspdk_accel_error.a 00:07:52.770 SO libspdk_accel_ioat.so.6.0 00:07:52.770 SYMLINK libspdk_keyring_linux.so 00:07:52.770 SO libspdk_keyring_file.so.2.0 00:07:52.770 SO 
libspdk_accel_error.so.2.0 00:07:53.027 SYMLINK libspdk_blob_bdev.so 00:07:53.027 SYMLINK libspdk_accel_ioat.so 00:07:53.027 LIB libspdk_accel_dsa.a 00:07:53.027 SYMLINK libspdk_keyring_file.so 00:07:53.027 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:53.027 SYMLINK libspdk_accel_error.so 00:07:53.027 CC module/accel/iaa/accel_iaa.o 00:07:53.027 SO libspdk_accel_dsa.so.5.0 00:07:53.027 LIB libspdk_scheduler_dpdk_governor.a 00:07:53.027 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:53.027 SYMLINK libspdk_accel_dsa.so 00:07:53.027 CC module/fsdev/aio/linux_aio_mgr.o 00:07:53.027 CC module/scheduler/gscheduler/gscheduler.o 00:07:53.027 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:53.027 CC module/sock/uring/uring.o 00:07:53.284 CC module/accel/iaa/accel_iaa_rpc.o 00:07:53.284 LIB libspdk_sock_posix.a 00:07:53.284 CC module/bdev/delay/vbdev_delay.o 00:07:53.284 CC module/blobfs/bdev/blobfs_bdev.o 00:07:53.284 SO libspdk_sock_posix.so.6.0 00:07:53.284 LIB libspdk_scheduler_gscheduler.a 00:07:53.284 LIB libspdk_fsdev_aio.a 00:07:53.284 CC module/bdev/error/vbdev_error.o 00:07:53.284 SO libspdk_scheduler_gscheduler.so.4.0 00:07:53.284 SO libspdk_fsdev_aio.so.1.0 00:07:53.284 LIB libspdk_accel_iaa.a 00:07:53.284 CC module/bdev/gpt/gpt.o 00:07:53.284 SO libspdk_accel_iaa.so.3.0 00:07:53.284 SYMLINK libspdk_scheduler_gscheduler.so 00:07:53.284 SYMLINK libspdk_sock_posix.so 00:07:53.284 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:53.284 SYMLINK libspdk_fsdev_aio.so 00:07:53.542 CC module/bdev/lvol/vbdev_lvol.o 00:07:53.542 SYMLINK libspdk_accel_iaa.so 00:07:53.542 CC module/bdev/gpt/vbdev_gpt.o 00:07:53.542 CC module/bdev/null/bdev_null.o 00:07:53.542 CC module/bdev/malloc/bdev_malloc.o 00:07:53.542 LIB libspdk_blobfs_bdev.a 00:07:53.542 CC module/bdev/null/bdev_null_rpc.o 00:07:53.542 SO libspdk_blobfs_bdev.so.6.0 00:07:53.542 CC module/bdev/error/vbdev_error_rpc.o 00:07:53.542 CC module/bdev/nvme/bdev_nvme.o 00:07:53.542 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:53.542 SYMLINK libspdk_blobfs_bdev.so 00:07:53.542 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:53.799 LIB libspdk_bdev_gpt.a 00:07:53.799 CC module/bdev/nvme/nvme_rpc.o 00:07:53.799 LIB libspdk_bdev_error.a 00:07:53.799 SO libspdk_bdev_gpt.so.6.0 00:07:53.799 SO libspdk_bdev_error.so.6.0 00:07:53.799 LIB libspdk_bdev_null.a 00:07:53.799 LIB libspdk_sock_uring.a 00:07:53.799 LIB libspdk_bdev_delay.a 00:07:53.799 SYMLINK libspdk_bdev_gpt.so 00:07:53.799 SO libspdk_sock_uring.so.5.0 00:07:53.799 SO libspdk_bdev_null.so.6.0 00:07:53.799 SO libspdk_bdev_delay.so.6.0 00:07:53.799 SYMLINK libspdk_bdev_error.so 00:07:54.056 SYMLINK libspdk_sock_uring.so 00:07:54.056 CC module/bdev/nvme/bdev_mdns_client.o 00:07:54.056 SYMLINK libspdk_bdev_null.so 00:07:54.056 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:54.056 SYMLINK libspdk_bdev_delay.so 00:07:54.056 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:54.056 CC module/bdev/nvme/vbdev_opal.o 00:07:54.056 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:54.056 CC module/bdev/passthru/vbdev_passthru.o 00:07:54.056 CC module/bdev/raid/bdev_raid.o 00:07:54.056 CC module/bdev/raid/bdev_raid_rpc.o 00:07:54.056 LIB libspdk_bdev_malloc.a 00:07:54.056 CC module/bdev/split/vbdev_split.o 00:07:54.056 SO libspdk_bdev_malloc.so.6.0 00:07:54.313 SYMLINK libspdk_bdev_malloc.so 00:07:54.313 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:54.313 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:54.313 CC module/bdev/raid/bdev_raid_sb.o 00:07:54.313 CC module/bdev/raid/raid0.o 00:07:54.313 LIB libspdk_bdev_lvol.a 
00:07:54.313 CC module/bdev/split/vbdev_split_rpc.o 00:07:54.313 SO libspdk_bdev_lvol.so.6.0 00:07:54.313 LIB libspdk_bdev_passthru.a 00:07:54.313 SYMLINK libspdk_bdev_lvol.so 00:07:54.570 SO libspdk_bdev_passthru.so.6.0 00:07:54.570 SYMLINK libspdk_bdev_passthru.so 00:07:54.570 LIB libspdk_bdev_split.a 00:07:54.570 CC module/bdev/raid/raid1.o 00:07:54.570 SO libspdk_bdev_split.so.6.0 00:07:54.570 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:54.570 CC module/bdev/uring/bdev_uring.o 00:07:54.570 CC module/bdev/aio/bdev_aio.o 00:07:54.570 CC module/bdev/ftl/bdev_ftl.o 00:07:54.570 SYMLINK libspdk_bdev_split.so 00:07:54.570 CC module/bdev/uring/bdev_uring_rpc.o 00:07:54.830 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:54.830 CC module/bdev/iscsi/bdev_iscsi.o 00:07:54.830 CC module/bdev/aio/bdev_aio_rpc.o 00:07:54.830 CC module/bdev/raid/concat.o 00:07:54.830 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:54.830 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:55.087 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:55.087 LIB libspdk_bdev_uring.a 00:07:55.087 SO libspdk_bdev_uring.so.6.0 00:07:55.087 LIB libspdk_bdev_aio.a 00:07:55.087 SYMLINK libspdk_bdev_uring.so 00:07:55.087 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:55.087 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:55.087 SO libspdk_bdev_aio.so.6.0 00:07:55.087 LIB libspdk_bdev_zone_block.a 00:07:55.087 LIB libspdk_bdev_ftl.a 00:07:55.087 SO libspdk_bdev_zone_block.so.6.0 00:07:55.345 SO libspdk_bdev_ftl.so.6.0 00:07:55.345 SYMLINK libspdk_bdev_aio.so 00:07:55.345 LIB libspdk_bdev_raid.a 00:07:55.345 SYMLINK libspdk_bdev_zone_block.so 00:07:55.345 SYMLINK libspdk_bdev_ftl.so 00:07:55.345 SO libspdk_bdev_raid.so.6.0 00:07:55.345 LIB libspdk_bdev_iscsi.a 00:07:55.345 LIB libspdk_bdev_virtio.a 00:07:55.345 SO libspdk_bdev_iscsi.so.6.0 00:07:55.345 SYMLINK libspdk_bdev_raid.so 00:07:55.345 SO libspdk_bdev_virtio.so.6.0 00:07:55.345 SYMLINK libspdk_bdev_iscsi.so 00:07:55.602 SYMLINK libspdk_bdev_virtio.so 00:07:56.169 LIB libspdk_bdev_nvme.a 00:07:56.485 SO libspdk_bdev_nvme.so.7.1 00:07:56.485 SYMLINK libspdk_bdev_nvme.so 00:07:57.053 CC module/event/subsystems/scheduler/scheduler.o 00:07:57.053 CC module/event/subsystems/vmd/vmd.o 00:07:57.053 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:57.053 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:57.053 CC module/event/subsystems/iobuf/iobuf.o 00:07:57.053 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:57.053 CC module/event/subsystems/keyring/keyring.o 00:07:57.053 CC module/event/subsystems/sock/sock.o 00:07:57.053 CC module/event/subsystems/fsdev/fsdev.o 00:07:57.053 LIB libspdk_event_scheduler.a 00:07:57.053 LIB libspdk_event_vhost_blk.a 00:07:57.053 LIB libspdk_event_keyring.a 00:07:57.053 SO libspdk_event_scheduler.so.4.0 00:07:57.053 SO libspdk_event_vhost_blk.so.3.0 00:07:57.053 LIB libspdk_event_vmd.a 00:07:57.053 LIB libspdk_event_fsdev.a 00:07:57.053 SO libspdk_event_keyring.so.1.0 00:07:57.053 SO libspdk_event_vmd.so.6.0 00:07:57.053 SO libspdk_event_fsdev.so.1.0 00:07:57.311 SYMLINK libspdk_event_vhost_blk.so 00:07:57.311 SYMLINK libspdk_event_scheduler.so 00:07:57.311 SYMLINK libspdk_event_keyring.so 00:07:57.311 LIB libspdk_event_iobuf.a 00:07:57.311 LIB libspdk_event_sock.a 00:07:57.311 SYMLINK libspdk_event_vmd.so 00:07:57.311 SYMLINK libspdk_event_fsdev.so 00:07:57.311 SO libspdk_event_sock.so.5.0 00:07:57.311 SO libspdk_event_iobuf.so.3.0 00:07:57.311 SYMLINK libspdk_event_iobuf.so 00:07:57.311 SYMLINK libspdk_event_sock.so 00:07:57.569 CC 
module/event/subsystems/accel/accel.o 00:07:57.828 LIB libspdk_event_accel.a 00:07:57.828 SO libspdk_event_accel.so.6.0 00:07:57.828 SYMLINK libspdk_event_accel.so 00:07:58.086 CC module/event/subsystems/bdev/bdev.o 00:07:58.344 LIB libspdk_event_bdev.a 00:07:58.344 SO libspdk_event_bdev.so.6.0 00:07:58.344 SYMLINK libspdk_event_bdev.so 00:07:58.603 CC module/event/subsystems/nbd/nbd.o 00:07:58.603 CC module/event/subsystems/ublk/ublk.o 00:07:58.603 CC module/event/subsystems/scsi/scsi.o 00:07:58.603 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:58.603 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:58.861 LIB libspdk_event_nbd.a 00:07:58.861 LIB libspdk_event_ublk.a 00:07:58.861 LIB libspdk_event_scsi.a 00:07:58.861 SO libspdk_event_nbd.so.6.0 00:07:58.861 SO libspdk_event_ublk.so.3.0 00:07:58.861 SO libspdk_event_scsi.so.6.0 00:07:58.861 SYMLINK libspdk_event_nbd.so 00:07:58.861 SYMLINK libspdk_event_ublk.so 00:07:58.861 SYMLINK libspdk_event_scsi.so 00:07:58.861 LIB libspdk_event_nvmf.a 00:07:59.119 SO libspdk_event_nvmf.so.6.0 00:07:59.119 SYMLINK libspdk_event_nvmf.so 00:07:59.119 CC module/event/subsystems/iscsi/iscsi.o 00:07:59.119 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:59.378 LIB libspdk_event_vhost_scsi.a 00:07:59.378 LIB libspdk_event_iscsi.a 00:07:59.378 SO libspdk_event_vhost_scsi.so.3.0 00:07:59.378 SO libspdk_event_iscsi.so.6.0 00:07:59.636 SYMLINK libspdk_event_vhost_scsi.so 00:07:59.636 SYMLINK libspdk_event_iscsi.so 00:07:59.636 SO libspdk.so.6.0 00:07:59.636 SYMLINK libspdk.so 00:07:59.894 CXX app/trace/trace.o 00:07:59.894 CC app/trace_record/trace_record.o 00:07:59.894 TEST_HEADER include/spdk/accel.h 00:07:59.894 TEST_HEADER include/spdk/accel_module.h 00:07:59.894 TEST_HEADER include/spdk/assert.h 00:07:59.894 TEST_HEADER include/spdk/barrier.h 00:07:59.894 TEST_HEADER include/spdk/base64.h 00:07:59.894 TEST_HEADER include/spdk/bdev.h 00:07:59.894 TEST_HEADER include/spdk/bdev_module.h 00:07:59.894 TEST_HEADER include/spdk/bdev_zone.h 00:07:59.894 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:59.894 TEST_HEADER include/spdk/bit_array.h 00:07:59.894 TEST_HEADER include/spdk/bit_pool.h 00:07:59.894 TEST_HEADER include/spdk/blob_bdev.h 00:07:59.894 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:59.894 TEST_HEADER include/spdk/blobfs.h 00:07:59.894 TEST_HEADER include/spdk/blob.h 00:07:59.894 TEST_HEADER include/spdk/conf.h 00:07:59.894 TEST_HEADER include/spdk/config.h 00:07:59.894 TEST_HEADER include/spdk/cpuset.h 00:07:59.894 TEST_HEADER include/spdk/crc16.h 00:07:59.894 TEST_HEADER include/spdk/crc32.h 00:07:59.894 TEST_HEADER include/spdk/crc64.h 00:07:59.894 TEST_HEADER include/spdk/dif.h 00:07:59.894 TEST_HEADER include/spdk/dma.h 00:07:59.894 TEST_HEADER include/spdk/endian.h 00:07:59.894 TEST_HEADER include/spdk/env_dpdk.h 00:07:59.894 TEST_HEADER include/spdk/env.h 00:07:59.894 TEST_HEADER include/spdk/event.h 00:07:59.894 TEST_HEADER include/spdk/fd_group.h 00:08:00.153 TEST_HEADER include/spdk/fd.h 00:08:00.153 TEST_HEADER include/spdk/file.h 00:08:00.153 TEST_HEADER include/spdk/fsdev.h 00:08:00.153 TEST_HEADER include/spdk/fsdev_module.h 00:08:00.153 TEST_HEADER include/spdk/ftl.h 00:08:00.153 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:00.153 TEST_HEADER include/spdk/gpt_spec.h 00:08:00.153 TEST_HEADER include/spdk/hexlify.h 00:08:00.153 CC test/thread/poller_perf/poller_perf.o 00:08:00.153 CC examples/ioat/perf/perf.o 00:08:00.153 CC examples/util/zipf/zipf.o 00:08:00.153 TEST_HEADER include/spdk/histogram_data.h 00:08:00.153 
TEST_HEADER include/spdk/idxd.h 00:08:00.153 TEST_HEADER include/spdk/idxd_spec.h 00:08:00.153 TEST_HEADER include/spdk/init.h 00:08:00.153 TEST_HEADER include/spdk/ioat.h 00:08:00.153 TEST_HEADER include/spdk/ioat_spec.h 00:08:00.153 TEST_HEADER include/spdk/iscsi_spec.h 00:08:00.153 TEST_HEADER include/spdk/json.h 00:08:00.153 TEST_HEADER include/spdk/jsonrpc.h 00:08:00.153 TEST_HEADER include/spdk/keyring.h 00:08:00.153 TEST_HEADER include/spdk/keyring_module.h 00:08:00.153 TEST_HEADER include/spdk/likely.h 00:08:00.153 TEST_HEADER include/spdk/log.h 00:08:00.153 TEST_HEADER include/spdk/lvol.h 00:08:00.153 CC test/dma/test_dma/test_dma.o 00:08:00.153 TEST_HEADER include/spdk/md5.h 00:08:00.153 TEST_HEADER include/spdk/memory.h 00:08:00.153 TEST_HEADER include/spdk/mmio.h 00:08:00.153 TEST_HEADER include/spdk/nbd.h 00:08:00.153 TEST_HEADER include/spdk/net.h 00:08:00.153 CC test/app/bdev_svc/bdev_svc.o 00:08:00.153 TEST_HEADER include/spdk/notify.h 00:08:00.153 TEST_HEADER include/spdk/nvme.h 00:08:00.153 TEST_HEADER include/spdk/nvme_intel.h 00:08:00.153 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:00.153 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:00.153 TEST_HEADER include/spdk/nvme_spec.h 00:08:00.153 TEST_HEADER include/spdk/nvme_zns.h 00:08:00.153 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:00.153 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:00.153 TEST_HEADER include/spdk/nvmf.h 00:08:00.153 TEST_HEADER include/spdk/nvmf_spec.h 00:08:00.153 TEST_HEADER include/spdk/nvmf_transport.h 00:08:00.153 TEST_HEADER include/spdk/opal.h 00:08:00.153 TEST_HEADER include/spdk/opal_spec.h 00:08:00.153 TEST_HEADER include/spdk/pci_ids.h 00:08:00.153 TEST_HEADER include/spdk/pipe.h 00:08:00.153 TEST_HEADER include/spdk/queue.h 00:08:00.153 TEST_HEADER include/spdk/reduce.h 00:08:00.153 TEST_HEADER include/spdk/rpc.h 00:08:00.153 TEST_HEADER include/spdk/scheduler.h 00:08:00.153 TEST_HEADER include/spdk/scsi.h 00:08:00.153 TEST_HEADER include/spdk/scsi_spec.h 00:08:00.153 TEST_HEADER include/spdk/sock.h 00:08:00.153 CC test/env/mem_callbacks/mem_callbacks.o 00:08:00.153 TEST_HEADER include/spdk/stdinc.h 00:08:00.153 TEST_HEADER include/spdk/string.h 00:08:00.153 TEST_HEADER include/spdk/thread.h 00:08:00.153 TEST_HEADER include/spdk/trace.h 00:08:00.153 TEST_HEADER include/spdk/trace_parser.h 00:08:00.153 TEST_HEADER include/spdk/tree.h 00:08:00.153 TEST_HEADER include/spdk/ublk.h 00:08:00.153 TEST_HEADER include/spdk/util.h 00:08:00.153 TEST_HEADER include/spdk/uuid.h 00:08:00.153 TEST_HEADER include/spdk/version.h 00:08:00.153 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:00.153 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:00.153 TEST_HEADER include/spdk/vhost.h 00:08:00.153 TEST_HEADER include/spdk/vmd.h 00:08:00.153 TEST_HEADER include/spdk/xor.h 00:08:00.153 TEST_HEADER include/spdk/zipf.h 00:08:00.153 CXX test/cpp_headers/accel.o 00:08:00.153 LINK interrupt_tgt 00:08:00.153 LINK spdk_trace_record 00:08:00.153 LINK poller_perf 00:08:00.153 LINK zipf 00:08:00.411 LINK ioat_perf 00:08:00.411 LINK bdev_svc 00:08:00.411 LINK spdk_trace 00:08:00.411 CXX test/cpp_headers/accel_module.o 00:08:00.411 CXX test/cpp_headers/assert.o 00:08:00.411 CXX test/cpp_headers/barrier.o 00:08:00.411 CC test/rpc_client/rpc_client_test.o 00:08:00.411 CC examples/ioat/verify/verify.o 00:08:00.670 CC test/event/event_perf/event_perf.o 00:08:00.670 LINK test_dma 00:08:00.670 CXX test/cpp_headers/base64.o 00:08:00.670 LINK rpc_client_test 00:08:00.670 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:00.670 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:00.670 CC app/nvmf_tgt/nvmf_main.o 00:08:00.670 LINK verify 00:08:00.670 LINK event_perf 00:08:00.670 CC app/iscsi_tgt/iscsi_tgt.o 00:08:00.928 CXX test/cpp_headers/bdev.o 00:08:00.928 LINK mem_callbacks 00:08:00.928 LINK nvmf_tgt 00:08:00.928 CC test/env/vtophys/vtophys.o 00:08:00.928 LINK iscsi_tgt 00:08:00.928 CC app/spdk_tgt/spdk_tgt.o 00:08:01.187 CC test/event/reactor/reactor.o 00:08:01.187 CXX test/cpp_headers/bdev_module.o 00:08:01.187 CC test/event/reactor_perf/reactor_perf.o 00:08:01.187 LINK vtophys 00:08:01.187 CC examples/thread/thread/thread_ex.o 00:08:01.187 LINK nvme_fuzz 00:08:01.187 LINK reactor 00:08:01.187 LINK reactor_perf 00:08:01.445 LINK spdk_tgt 00:08:01.445 CXX test/cpp_headers/bdev_zone.o 00:08:01.445 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:01.445 CC test/event/app_repeat/app_repeat.o 00:08:01.445 CC test/env/memory/memory_ut.o 00:08:01.445 LINK thread 00:08:01.445 CC test/accel/dif/dif.o 00:08:01.445 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:01.445 LINK env_dpdk_post_init 00:08:01.703 CC test/event/scheduler/scheduler.o 00:08:01.703 CXX test/cpp_headers/bit_array.o 00:08:01.703 CC app/spdk_lspci/spdk_lspci.o 00:08:01.703 LINK app_repeat 00:08:01.703 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:01.703 LINK spdk_lspci 00:08:01.703 CXX test/cpp_headers/bit_pool.o 00:08:01.961 CXX test/cpp_headers/blob_bdev.o 00:08:01.961 LINK scheduler 00:08:01.961 CC examples/sock/hello_world/hello_sock.o 00:08:01.961 CC test/env/pci/pci_ut.o 00:08:01.961 CXX test/cpp_headers/blobfs_bdev.o 00:08:02.219 CC app/spdk_nvme_perf/perf.o 00:08:02.219 LINK hello_sock 00:08:02.219 LINK vhost_fuzz 00:08:02.219 CC test/blobfs/mkfs/mkfs.o 00:08:02.219 LINK dif 00:08:02.219 CXX test/cpp_headers/blobfs.o 00:08:02.219 CC test/lvol/esnap/esnap.o 00:08:02.219 LINK pci_ut 00:08:02.477 LINK mkfs 00:08:02.477 CXX test/cpp_headers/blob.o 00:08:02.477 CC examples/vmd/lsvmd/lsvmd.o 00:08:02.477 CC examples/idxd/perf/perf.o 00:08:02.477 LINK iscsi_fuzz 00:08:02.736 CC test/app/histogram_perf/histogram_perf.o 00:08:02.736 CC examples/vmd/led/led.o 00:08:02.736 CXX test/cpp_headers/conf.o 00:08:02.736 LINK lsvmd 00:08:02.736 LINK memory_ut 00:08:02.736 CC app/spdk_nvme_identify/identify.o 00:08:02.736 CC test/app/jsoncat/jsoncat.o 00:08:02.736 LINK histogram_perf 00:08:02.736 LINK led 00:08:02.736 CXX test/cpp_headers/config.o 00:08:02.736 CXX test/cpp_headers/cpuset.o 00:08:02.736 LINK idxd_perf 00:08:02.995 LINK jsoncat 00:08:02.995 CXX test/cpp_headers/crc16.o 00:08:02.995 LINK spdk_nvme_perf 00:08:02.995 CC test/app/stub/stub.o 00:08:02.995 CC examples/accel/perf/accel_perf.o 00:08:02.995 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:03.253 CC test/nvme/aer/aer.o 00:08:03.253 CXX test/cpp_headers/crc32.o 00:08:03.253 CC examples/blob/hello_world/hello_blob.o 00:08:03.253 LINK stub 00:08:03.253 CC app/spdk_nvme_discover/discovery_aer.o 00:08:03.253 CC examples/nvme/hello_world/hello_world.o 00:08:03.253 CXX test/cpp_headers/crc64.o 00:08:03.511 LINK hello_fsdev 00:08:03.511 LINK hello_blob 00:08:03.511 LINK aer 00:08:03.511 LINK hello_world 00:08:03.511 LINK spdk_nvme_discover 00:08:03.511 LINK spdk_nvme_identify 00:08:03.511 CXX test/cpp_headers/dif.o 00:08:03.511 LINK accel_perf 00:08:03.770 CC test/nvme/reset/reset.o 00:08:03.770 CXX test/cpp_headers/dma.o 00:08:03.770 CC test/nvme/sgl/sgl.o 00:08:03.770 CC test/bdev/bdevio/bdevio.o 00:08:03.770 CC test/nvme/e2edp/nvme_dp.o 00:08:03.770 CC examples/nvme/reconnect/reconnect.o 
00:08:03.770 CC examples/blob/cli/blobcli.o 00:08:03.770 CC app/spdk_top/spdk_top.o 00:08:03.770 CC test/nvme/overhead/overhead.o 00:08:04.029 CXX test/cpp_headers/endian.o 00:08:04.029 LINK reset 00:08:04.029 LINK sgl 00:08:04.029 LINK nvme_dp 00:08:04.287 LINK bdevio 00:08:04.287 LINK overhead 00:08:04.287 LINK reconnect 00:08:04.287 CXX test/cpp_headers/env_dpdk.o 00:08:04.287 CC test/nvme/err_injection/err_injection.o 00:08:04.288 CC test/nvme/startup/startup.o 00:08:04.288 LINK blobcli 00:08:04.546 CC test/nvme/reserve/reserve.o 00:08:04.546 CC test/nvme/simple_copy/simple_copy.o 00:08:04.546 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:04.546 CC examples/bdev/hello_world/hello_bdev.o 00:08:04.546 CXX test/cpp_headers/env.o 00:08:04.546 LINK err_injection 00:08:04.546 LINK startup 00:08:04.546 CC test/nvme/connect_stress/connect_stress.o 00:08:04.546 CXX test/cpp_headers/event.o 00:08:04.804 LINK reserve 00:08:04.804 LINK simple_copy 00:08:04.804 LINK spdk_top 00:08:04.804 LINK hello_bdev 00:08:04.804 CC test/nvme/boot_partition/boot_partition.o 00:08:04.804 CXX test/cpp_headers/fd_group.o 00:08:04.804 LINK connect_stress 00:08:04.804 CC test/nvme/compliance/nvme_compliance.o 00:08:04.804 CC test/nvme/fused_ordering/fused_ordering.o 00:08:05.062 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:05.062 LINK boot_partition 00:08:05.062 LINK nvme_manage 00:08:05.062 CC app/vhost/vhost.o 00:08:05.062 CXX test/cpp_headers/fd.o 00:08:05.062 CC test/nvme/fdp/fdp.o 00:08:05.062 CC examples/bdev/bdevperf/bdevperf.o 00:08:05.062 LINK fused_ordering 00:08:05.320 LINK doorbell_aers 00:08:05.320 CXX test/cpp_headers/file.o 00:08:05.320 LINK nvme_compliance 00:08:05.320 LINK vhost 00:08:05.320 CC examples/nvme/arbitration/arbitration.o 00:08:05.320 CC test/nvme/cuse/cuse.o 00:08:05.320 CXX test/cpp_headers/fsdev.o 00:08:05.320 CC app/spdk_dd/spdk_dd.o 00:08:05.602 CC examples/nvme/hotplug/hotplug.o 00:08:05.602 LINK fdp 00:08:05.602 CC app/fio/nvme/fio_plugin.o 00:08:05.602 CXX test/cpp_headers/fsdev_module.o 00:08:05.602 CC app/fio/bdev/fio_plugin.o 00:08:05.602 LINK arbitration 00:08:05.602 LINK hotplug 00:08:05.887 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:05.887 CXX test/cpp_headers/ftl.o 00:08:05.887 CC examples/nvme/abort/abort.o 00:08:05.887 LINK spdk_dd 00:08:05.887 LINK cmb_copy 00:08:05.887 LINK bdevperf 00:08:05.887 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:06.145 CXX test/cpp_headers/fuse_dispatcher.o 00:08:06.145 LINK spdk_bdev 00:08:06.145 CXX test/cpp_headers/gpt_spec.o 00:08:06.145 LINK spdk_nvme 00:08:06.145 CXX test/cpp_headers/hexlify.o 00:08:06.145 CXX test/cpp_headers/histogram_data.o 00:08:06.145 LINK pmr_persistence 00:08:06.145 CXX test/cpp_headers/idxd.o 00:08:06.145 CXX test/cpp_headers/idxd_spec.o 00:08:06.145 CXX test/cpp_headers/init.o 00:08:06.403 CXX test/cpp_headers/ioat.o 00:08:06.403 LINK abort 00:08:06.403 CXX test/cpp_headers/ioat_spec.o 00:08:06.403 CXX test/cpp_headers/iscsi_spec.o 00:08:06.403 CXX test/cpp_headers/json.o 00:08:06.403 CXX test/cpp_headers/jsonrpc.o 00:08:06.403 CXX test/cpp_headers/keyring.o 00:08:06.403 CXX test/cpp_headers/keyring_module.o 00:08:06.403 CXX test/cpp_headers/likely.o 00:08:06.403 CXX test/cpp_headers/log.o 00:08:06.661 CXX test/cpp_headers/lvol.o 00:08:06.661 CXX test/cpp_headers/md5.o 00:08:06.661 CXX test/cpp_headers/memory.o 00:08:06.661 CXX test/cpp_headers/mmio.o 00:08:06.661 CXX test/cpp_headers/nbd.o 00:08:06.661 CXX test/cpp_headers/net.o 00:08:06.661 CXX test/cpp_headers/notify.o 00:08:06.661 CC 
examples/nvmf/nvmf/nvmf.o 00:08:06.661 CXX test/cpp_headers/nvme.o 00:08:06.661 LINK cuse 00:08:06.661 CXX test/cpp_headers/nvme_intel.o 00:08:06.920 CXX test/cpp_headers/nvme_ocssd.o 00:08:06.920 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:06.920 CXX test/cpp_headers/nvme_spec.o 00:08:06.920 CXX test/cpp_headers/nvme_zns.o 00:08:06.920 CXX test/cpp_headers/nvmf_cmd.o 00:08:06.920 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:06.920 CXX test/cpp_headers/nvmf.o 00:08:06.920 CXX test/cpp_headers/nvmf_spec.o 00:08:06.920 LINK nvmf 00:08:06.920 CXX test/cpp_headers/nvmf_transport.o 00:08:06.920 CXX test/cpp_headers/opal.o 00:08:06.920 CXX test/cpp_headers/opal_spec.o 00:08:07.179 CXX test/cpp_headers/pci_ids.o 00:08:07.179 CXX test/cpp_headers/pipe.o 00:08:07.179 CXX test/cpp_headers/queue.o 00:08:07.179 CXX test/cpp_headers/reduce.o 00:08:07.179 CXX test/cpp_headers/rpc.o 00:08:07.179 CXX test/cpp_headers/scheduler.o 00:08:07.179 CXX test/cpp_headers/scsi.o 00:08:07.179 CXX test/cpp_headers/scsi_spec.o 00:08:07.179 CXX test/cpp_headers/sock.o 00:08:07.179 CXX test/cpp_headers/stdinc.o 00:08:07.179 CXX test/cpp_headers/string.o 00:08:07.179 CXX test/cpp_headers/thread.o 00:08:07.179 CXX test/cpp_headers/trace.o 00:08:07.179 CXX test/cpp_headers/trace_parser.o 00:08:07.438 CXX test/cpp_headers/tree.o 00:08:07.438 CXX test/cpp_headers/ublk.o 00:08:07.438 CXX test/cpp_headers/util.o 00:08:07.438 CXX test/cpp_headers/uuid.o 00:08:07.438 CXX test/cpp_headers/version.o 00:08:07.438 CXX test/cpp_headers/vfio_user_pci.o 00:08:07.438 CXX test/cpp_headers/vfio_user_spec.o 00:08:07.439 CXX test/cpp_headers/vhost.o 00:08:07.439 CXX test/cpp_headers/vmd.o 00:08:07.439 CXX test/cpp_headers/xor.o 00:08:07.439 CXX test/cpp_headers/zipf.o 00:08:07.439 LINK esnap 00:08:08.006 00:08:08.006 real 1m32.984s 00:08:08.006 user 8m33.024s 00:08:08.006 sys 1m43.009s 00:08:08.006 ************************************ 00:08:08.006 END TEST make 00:08:08.006 ************************************ 00:08:08.006 06:03:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:08.006 06:03:12 make -- common/autotest_common.sh@10 -- $ set +x 00:08:08.006 06:03:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:08.006 06:03:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:08.006 06:03:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:08.006 06:03:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:08.006 06:03:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:08.006 06:03:12 -- pm/common@44 -- $ pid=5419 00:08:08.006 06:03:12 -- pm/common@50 -- $ kill -TERM 5419 00:08:08.006 06:03:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:08.006 06:03:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:08.006 06:03:12 -- pm/common@44 -- $ pid=5421 00:08:08.006 06:03:12 -- pm/common@50 -- $ kill -TERM 5421 00:08:08.007 06:03:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:08.007 06:03:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:08.007 06:03:13 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.007 06:03:13 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.007 06:03:13 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.265 06:03:13 -- common/autotest_common.sh@1693 -- # lt 1.15 2 
00:08:08.265 06:03:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.265 06:03:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.265 06:03:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.265 06:03:13 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.265 06:03:13 -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.265 06:03:13 -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.265 06:03:13 -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.265 06:03:13 -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.265 06:03:13 -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.265 06:03:13 -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.265 06:03:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.265 06:03:13 -- scripts/common.sh@344 -- # case "$op" in 00:08:08.265 06:03:13 -- scripts/common.sh@345 -- # : 1 00:08:08.265 06:03:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.265 06:03:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.265 06:03:13 -- scripts/common.sh@365 -- # decimal 1 00:08:08.265 06:03:13 -- scripts/common.sh@353 -- # local d=1 00:08:08.265 06:03:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.265 06:03:13 -- scripts/common.sh@355 -- # echo 1 00:08:08.265 06:03:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.265 06:03:13 -- scripts/common.sh@366 -- # decimal 2 00:08:08.265 06:03:13 -- scripts/common.sh@353 -- # local d=2 00:08:08.265 06:03:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.265 06:03:13 -- scripts/common.sh@355 -- # echo 2 00:08:08.265 06:03:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.265 06:03:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.265 06:03:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.265 06:03:13 -- scripts/common.sh@368 -- # return 0 00:08:08.265 06:03:13 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.265 06:03:13 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.265 --rc genhtml_branch_coverage=1 00:08:08.265 --rc genhtml_function_coverage=1 00:08:08.265 --rc genhtml_legend=1 00:08:08.265 --rc geninfo_all_blocks=1 00:08:08.265 --rc geninfo_unexecuted_blocks=1 00:08:08.265 00:08:08.265 ' 00:08:08.265 06:03:13 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.266 --rc genhtml_branch_coverage=1 00:08:08.266 --rc genhtml_function_coverage=1 00:08:08.266 --rc genhtml_legend=1 00:08:08.266 --rc geninfo_all_blocks=1 00:08:08.266 --rc geninfo_unexecuted_blocks=1 00:08:08.266 00:08:08.266 ' 00:08:08.266 06:03:13 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.266 --rc genhtml_branch_coverage=1 00:08:08.266 --rc genhtml_function_coverage=1 00:08:08.266 --rc genhtml_legend=1 00:08:08.266 --rc geninfo_all_blocks=1 00:08:08.266 --rc geninfo_unexecuted_blocks=1 00:08:08.266 00:08:08.266 ' 00:08:08.266 06:03:13 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.266 --rc genhtml_branch_coverage=1 00:08:08.266 --rc genhtml_function_coverage=1 00:08:08.266 --rc genhtml_legend=1 00:08:08.266 --rc geninfo_all_blocks=1 00:08:08.266 --rc geninfo_unexecuted_blocks=1 00:08:08.266 00:08:08.266 ' 00:08:08.266 
06:03:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.266 06:03:13 -- nvmf/common.sh@7 -- # uname -s 00:08:08.266 06:03:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.266 06:03:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.266 06:03:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.266 06:03:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.266 06:03:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.266 06:03:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.266 06:03:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.266 06:03:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.266 06:03:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.266 06:03:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.266 06:03:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:08:08.266 06:03:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:08:08.266 06:03:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.266 06:03:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.266 06:03:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.266 06:03:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.266 06:03:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.266 06:03:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.266 06:03:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.266 06:03:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.266 06:03:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.266 06:03:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.266 06:03:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.266 06:03:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.266 06:03:13 -- paths/export.sh@5 -- # export PATH 00:08:08.266 06:03:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.266 06:03:13 -- nvmf/common.sh@51 -- # : 0 00:08:08.266 06:03:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.266 06:03:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.266 06:03:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.266 06:03:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.266 06:03:13 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.266 06:03:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.266 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.266 06:03:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.266 06:03:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.266 06:03:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.266 06:03:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:08.266 06:03:13 -- spdk/autotest.sh@32 -- # uname -s 00:08:08.266 06:03:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:08.266 06:03:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:08.266 06:03:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:08.266 06:03:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:08.266 06:03:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:08.266 06:03:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:08.266 06:03:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:08.266 06:03:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:08.266 06:03:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54567 00:08:08.266 06:03:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:08.266 06:03:13 -- pm/common@17 -- # local monitor 00:08:08.266 06:03:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:08.266 06:03:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:08.266 06:03:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:08.266 06:03:13 -- pm/common@25 -- # sleep 1 00:08:08.266 06:03:13 -- pm/common@21 -- # date +%s 00:08:08.266 06:03:13 -- pm/common@21 -- # date +%s 00:08:08.266 06:03:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732687393 00:08:08.266 06:03:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732687393 00:08:08.266 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732687393_collect-cpu-load.pm.log 00:08:08.266 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732687393_collect-vmstat.pm.log 00:08:09.201 06:03:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:09.201 06:03:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:09.201 06:03:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.201 06:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:09.201 06:03:14 -- spdk/autotest.sh@59 -- # create_test_list 00:08:09.201 06:03:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:09.201 06:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:09.460 06:03:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:09.460 06:03:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:09.460 06:03:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:09.460 06:03:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:09.460 06:03:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:09.460 06:03:14 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:08:09.460 06:03:14 -- common/autotest_common.sh@1457 -- # uname 00:08:09.460 06:03:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:09.460 06:03:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:09.460 06:03:14 -- common/autotest_common.sh@1477 -- # uname 00:08:09.460 06:03:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:09.460 06:03:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:09.460 06:03:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:09.460 lcov: LCOV version 1.15 00:08:09.460 06:03:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:27.568 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:27.568 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:45.786 06:03:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:45.786 06:03:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.786 06:03:49 -- common/autotest_common.sh@10 -- # set +x 00:08:45.786 06:03:49 -- spdk/autotest.sh@78 -- # rm -f 00:08:45.786 06:03:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:45.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:45.786 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:45.786 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:45.786 06:03:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:45.786 06:03:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:45.786 06:03:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:45.786 06:03:50 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:45.786 06:03:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:45.786 06:03:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:45.786 06:03:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:45.787 06:03:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:45.787 06:03:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:45.787 06:03:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:45.787 06:03:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:45.787 06:03:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:45.787 06:03:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:45.787 06:03:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:45.787 06:03:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:45.787 06:03:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:08:45.787 06:03:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:45.787 06:03:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:45.787 06:03:50 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:45.787 06:03:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:45.787 06:03:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:08:45.787 06:03:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:45.787 06:03:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:45.787 06:03:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:45.787 06:03:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:45.787 06:03:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:45.787 06:03:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:45.787 06:03:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:45.787 06:03:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:45.787 06:03:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:45.787 No valid GPT data, bailing 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # pt= 00:08:45.787 06:03:50 -- scripts/common.sh@395 -- # return 1 00:08:45.787 06:03:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:45.787 1+0 records in 00:08:45.787 1+0 records out 00:08:45.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451017 s, 232 MB/s 00:08:45.787 06:03:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:45.787 06:03:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:45.787 06:03:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:45.787 06:03:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:45.787 06:03:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:45.787 No valid GPT data, bailing 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # pt= 00:08:45.787 06:03:50 -- scripts/common.sh@395 -- # return 1 00:08:45.787 06:03:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:45.787 1+0 records in 00:08:45.787 1+0 records out 00:08:45.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517538 s, 203 MB/s 00:08:45.787 06:03:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:45.787 06:03:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:45.787 06:03:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:45.787 06:03:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:45.787 06:03:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:45.787 No valid GPT data, bailing 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # pt= 00:08:45.787 06:03:50 -- scripts/common.sh@395 -- # return 1 00:08:45.787 06:03:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:45.787 1+0 records in 00:08:45.787 1+0 records out 00:08:45.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596386 s, 176 MB/s 00:08:45.787 06:03:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:45.787 06:03:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:45.787 06:03:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:45.787 06:03:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:45.787 06:03:50 -- 
scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:45.787 No valid GPT data, bailing 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:45.787 06:03:50 -- scripts/common.sh@394 -- # pt= 00:08:45.787 06:03:50 -- scripts/common.sh@395 -- # return 1 00:08:45.787 06:03:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:45.787 1+0 records in 00:08:45.787 1+0 records out 00:08:45.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532633 s, 197 MB/s 00:08:45.787 06:03:50 -- spdk/autotest.sh@105 -- # sync 00:08:45.787 06:03:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:45.787 06:03:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:45.787 06:03:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:48.359 06:03:52 -- spdk/autotest.sh@111 -- # uname -s 00:08:48.359 06:03:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:48.359 06:03:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:48.359 06:03:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:48.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:48.616 Hugepages 00:08:48.616 node hugesize free / total 00:08:48.616 node0 1048576kB 0 / 0 00:08:48.616 node0 2048kB 0 / 0 00:08:48.616 00:08:48.616 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:48.616 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:48.874 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:48.874 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:48.874 06:03:53 -- spdk/autotest.sh@117 -- # uname -s 00:08:48.874 06:03:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:48.874 06:03:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:48.874 06:03:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:49.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.440 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.440 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.698 06:03:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:50.633 06:03:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:50.633 06:03:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:50.633 06:03:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:50.633 06:03:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:50.633 06:03:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:50.633 06:03:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:50.633 06:03:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.633 06:03:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.633 06:03:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.633 06:03:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:50.633 06:03:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:50.633 06:03:55 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:50.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:08:51.282 Waiting for block devices as requested 00:08:51.282 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.282 06:03:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:51.282 06:03:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:51.282 06:03:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:51.282 06:03:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:51.283 06:03:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:51.283 06:03:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:51.283 06:03:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:51.283 06:03:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1543 -- # continue 00:08:51.283 06:03:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:51.283 06:03:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:51.283 06:03:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:51.283 06:03:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:51.283 06:03:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:51.283 06:03:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:51.283 06:03:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 
00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:51.283 06:03:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:51.283 06:03:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:51.283 06:03:56 -- common/autotest_common.sh@1543 -- # continue 00:08:51.283 06:03:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:51.283 06:03:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.283 06:03:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.570 06:03:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:51.570 06:03:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.570 06:03:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.570 06:03:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:52.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.140 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.140 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.399 06:03:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:52.399 06:03:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.399 06:03:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 06:03:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:52.399 06:03:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:52.399 06:03:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:52.399 06:03:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:52.399 06:03:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:52.399 06:03:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:52.399 06:03:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:52.399 06:03:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:52.399 06:03:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.399 06:03:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.399 06:03:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.399 06:03:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.399 06:03:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.399 06:03:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:52.399 06:03:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:52.399 06:03:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:52.400 06:03:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:52.400 06:03:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:52.400 06:03:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:52.400 06:03:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:52.400 06:03:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:52.400 06:03:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:52.400 06:03:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:52.400 06:03:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:52.400 06:03:57 -- common/autotest_common.sh@1572 
-- # return 0 00:08:52.400 06:03:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:52.400 06:03:57 -- common/autotest_common.sh@1580 -- # return 0 00:08:52.400 06:03:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:52.400 06:03:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:52.400 06:03:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:52.400 06:03:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:52.400 06:03:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:52.400 06:03:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.400 06:03:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.400 06:03:57 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:08:52.400 06:03:57 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:08:52.400 06:03:57 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:08:52.400 06:03:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:52.400 06:03:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.400 06:03:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.400 06:03:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.400 ************************************ 00:08:52.400 START TEST env 00:08:52.400 ************************************ 00:08:52.400 06:03:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:52.400 * Looking for test storage... 00:08:52.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:52.400 06:03:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.400 06:03:57 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.400 06:03:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.659 06:03:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.659 06:03:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.659 06:03:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.659 06:03:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.659 06:03:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.659 06:03:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.659 06:03:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.659 06:03:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.659 06:03:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.659 06:03:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.659 06:03:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.659 06:03:57 env -- scripts/common.sh@344 -- # case "$op" in 00:08:52.659 06:03:57 env -- scripts/common.sh@345 -- # : 1 00:08:52.659 06:03:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.659 06:03:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.659 06:03:57 env -- scripts/common.sh@365 -- # decimal 1 00:08:52.659 06:03:57 env -- scripts/common.sh@353 -- # local d=1 00:08:52.659 06:03:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.659 06:03:57 env -- scripts/common.sh@355 -- # echo 1 00:08:52.659 06:03:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.659 06:03:57 env -- scripts/common.sh@366 -- # decimal 2 00:08:52.659 06:03:57 env -- scripts/common.sh@353 -- # local d=2 00:08:52.659 06:03:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.659 06:03:57 env -- scripts/common.sh@355 -- # echo 2 00:08:52.659 06:03:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.659 06:03:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.659 06:03:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.659 06:03:57 env -- scripts/common.sh@368 -- # return 0 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.659 --rc genhtml_branch_coverage=1 00:08:52.659 --rc genhtml_function_coverage=1 00:08:52.659 --rc genhtml_legend=1 00:08:52.659 --rc geninfo_all_blocks=1 00:08:52.659 --rc geninfo_unexecuted_blocks=1 00:08:52.659 00:08:52.659 ' 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.659 --rc genhtml_branch_coverage=1 00:08:52.659 --rc genhtml_function_coverage=1 00:08:52.659 --rc genhtml_legend=1 00:08:52.659 --rc geninfo_all_blocks=1 00:08:52.659 --rc geninfo_unexecuted_blocks=1 00:08:52.659 00:08:52.659 ' 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.659 --rc genhtml_branch_coverage=1 00:08:52.659 --rc genhtml_function_coverage=1 00:08:52.659 --rc genhtml_legend=1 00:08:52.659 --rc geninfo_all_blocks=1 00:08:52.659 --rc geninfo_unexecuted_blocks=1 00:08:52.659 00:08:52.659 ' 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.659 --rc genhtml_branch_coverage=1 00:08:52.659 --rc genhtml_function_coverage=1 00:08:52.659 --rc genhtml_legend=1 00:08:52.659 --rc geninfo_all_blocks=1 00:08:52.659 --rc geninfo_unexecuted_blocks=1 00:08:52.659 00:08:52.659 ' 00:08:52.659 06:03:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.659 06:03:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.659 06:03:57 env -- common/autotest_common.sh@10 -- # set +x 00:08:52.659 ************************************ 00:08:52.659 START TEST env_memory 00:08:52.659 ************************************ 00:08:52.659 06:03:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:52.659 00:08:52.659 00:08:52.659 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.659 http://cunit.sourceforge.net/ 00:08:52.659 00:08:52.659 00:08:52.659 Suite: memory 00:08:52.659 Test: alloc and free memory map ...[2024-11-27 06:03:57.635442] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:52.659 passed 00:08:52.659 Test: mem map translation ...[2024-11-27 06:03:57.668977] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:52.659 [2024-11-27 06:03:57.669035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:52.659 [2024-11-27 06:03:57.669092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:52.659 [2024-11-27 06:03:57.669103] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:52.659 passed 00:08:52.659 Test: mem map registration ...[2024-11-27 06:03:57.735820] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:52.659 [2024-11-27 06:03:57.735883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:52.919 passed 00:08:52.919 Test: mem map adjacent registrations ...passed 00:08:52.919 00:08:52.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.919 suites 1 1 n/a 0 0 00:08:52.919 tests 4 4 4 0 0 00:08:52.919 asserts 152 152 152 0 n/a 00:08:52.919 00:08:52.919 Elapsed time = 0.223 seconds 00:08:52.919 00:08:52.919 real 0m0.243s 00:08:52.919 user 0m0.225s 00:08:52.919 sys 0m0.013s 00:08:52.919 06:03:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.919 06:03:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:52.919 ************************************ 00:08:52.919 END TEST env_memory 00:08:52.919 ************************************ 00:08:52.919 06:03:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:52.919 06:03:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.919 06:03:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.919 06:03:57 env -- common/autotest_common.sh@10 -- # set +x 00:08:52.919 ************************************ 00:08:52.919 START TEST env_vtophys 00:08:52.919 ************************************ 00:08:52.919 06:03:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:52.919 EAL: lib.eal log level changed from notice to debug 00:08:52.919 EAL: Detected lcore 0 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 1 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 2 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 3 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 4 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 5 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 6 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 7 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 8 as core 0 on socket 0 00:08:52.919 EAL: Detected lcore 9 as core 0 on socket 0 00:08:52.919 EAL: Maximum logical cores by configuration: 128 00:08:52.919 EAL: Detected CPU lcores: 10 00:08:52.919 EAL: Detected NUMA nodes: 1 00:08:52.919 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:52.919 EAL: Detected shared linkage of DPDK 00:08:52.919 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:52.919 EAL: Selected IOVA mode 'PA' 00:08:52.919 EAL: Probing VFIO support... 00:08:52.919 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:52.919 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:52.919 EAL: Ask a virtual area of 0x2e000 bytes 00:08:52.919 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:52.919 EAL: Setting up physically contiguous memory... 00:08:52.919 EAL: Setting maximum number of open files to 524288 00:08:52.919 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:52.919 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:52.919 EAL: Ask a virtual area of 0x61000 bytes 00:08:52.919 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:52.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:52.919 EAL: Ask a virtual area of 0x400000000 bytes 00:08:52.919 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:52.919 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:52.919 EAL: Ask a virtual area of 0x61000 bytes 00:08:52.919 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:52.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:52.919 EAL: Ask a virtual area of 0x400000000 bytes 00:08:52.919 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:52.919 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:52.919 EAL: Ask a virtual area of 0x61000 bytes 00:08:52.919 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:52.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:52.919 EAL: Ask a virtual area of 0x400000000 bytes 00:08:52.919 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:52.919 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:52.919 EAL: Ask a virtual area of 0x61000 bytes 00:08:52.919 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:52.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:52.919 EAL: Ask a virtual area of 0x400000000 bytes 00:08:52.919 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:52.919 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:52.919 EAL: Hugepages will be freed exactly as allocated. 00:08:52.919 EAL: No shared files mode enabled, IPC is disabled 00:08:52.919 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: TSC frequency is ~2200000 KHz 00:08:53.179 EAL: Main lcore 0 is ready (tid=7fe3f6c5ea00;cpuset=[0]) 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 0 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 2MB 00:08:53.179 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:53.179 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:53.179 EAL: Mem event callback 'spdk:(nil)' registered 00:08:53.179 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:53.179 00:08:53.179 00:08:53.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.179 http://cunit.sourceforge.net/ 00:08:53.179 00:08:53.179 00:08:53.179 Suite: components_suite 00:08:53.179 Test: vtophys_malloc_test ...passed 00:08:53.179 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 4MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 4MB 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 6MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 6MB 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 10MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 10MB 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 18MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 18MB 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 34MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 34MB 00:08:53.179 EAL: Trying to obtain current memory policy. 
00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 66MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 66MB 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.179 EAL: Restoring previous memory policy: 4 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was expanded by 130MB 00:08:53.179 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.179 EAL: request: mp_malloc_sync 00:08:53.179 EAL: No shared files mode enabled, IPC is disabled 00:08:53.179 EAL: Heap on socket 0 was shrunk by 130MB 00:08:53.179 EAL: Trying to obtain current memory policy. 00:08:53.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.441 EAL: Restoring previous memory policy: 4 00:08:53.441 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.441 EAL: request: mp_malloc_sync 00:08:53.441 EAL: No shared files mode enabled, IPC is disabled 00:08:53.441 EAL: Heap on socket 0 was expanded by 258MB 00:08:53.441 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.441 EAL: request: mp_malloc_sync 00:08:53.441 EAL: No shared files mode enabled, IPC is disabled 00:08:53.441 EAL: Heap on socket 0 was shrunk by 258MB 00:08:53.441 EAL: Trying to obtain current memory policy. 00:08:53.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.700 EAL: Restoring previous memory policy: 4 00:08:53.700 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.700 EAL: request: mp_malloc_sync 00:08:53.700 EAL: No shared files mode enabled, IPC is disabled 00:08:53.700 EAL: Heap on socket 0 was expanded by 514MB 00:08:53.700 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.700 EAL: request: mp_malloc_sync 00:08:53.700 EAL: No shared files mode enabled, IPC is disabled 00:08:53.700 EAL: Heap on socket 0 was shrunk by 514MB 00:08:53.700 EAL: Trying to obtain current memory policy. 
00:08:53.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.959 EAL: Restoring previous memory policy: 4 00:08:53.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.959 EAL: request: mp_malloc_sync 00:08:53.959 EAL: No shared files mode enabled, IPC is disabled 00:08:53.959 EAL: Heap on socket 0 was expanded by 1026MB 00:08:54.219 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.478 passed 00:08:54.478 00:08:54.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.478 suites 1 1 n/a 0 0 00:08:54.478 tests 2 2 2 0 0 00:08:54.478 asserts 5400 5400 5400 0 n/a 00:08:54.478 00:08:54.478 Elapsed time = 1.360 seconds 00:08:54.478 EAL: request: mp_malloc_sync 00:08:54.478 EAL: No shared files mode enabled, IPC is disabled 00:08:54.478 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:54.478 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.478 EAL: request: mp_malloc_sync 00:08:54.478 EAL: No shared files mode enabled, IPC is disabled 00:08:54.478 EAL: Heap on socket 0 was shrunk by 2MB 00:08:54.478 EAL: No shared files mode enabled, IPC is disabled 00:08:54.478 EAL: No shared files mode enabled, IPC is disabled 00:08:54.478 EAL: No shared files mode enabled, IPC is disabled 00:08:54.478 00:08:54.478 real 0m1.584s 00:08:54.478 user 0m0.870s 00:08:54.478 sys 0m0.577s 00:08:54.478 06:03:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.478 06:03:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:54.478 ************************************ 00:08:54.478 END TEST env_vtophys 00:08:54.478 ************************************ 00:08:54.478 06:03:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:54.478 06:03:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.478 06:03:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.478 06:03:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:54.478 ************************************ 00:08:54.478 START TEST env_pci 00:08:54.478 ************************************ 00:08:54.478 06:03:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:54.478 00:08:54.478 00:08:54.478 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.478 http://cunit.sourceforge.net/ 00:08:54.478 00:08:54.478 00:08:54.478 Suite: pci 00:08:54.478 Test: pci_hook ...[2024-11-27 06:03:59.538906] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56831 has claimed it 00:08:54.478 passed 00:08:54.478 00:08:54.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.478 suites 1 1 n/a 0 0 00:08:54.478 tests 1 1 1 0 0 00:08:54.478 asserts 25 25 25 0 n/a 00:08:54.478 00:08:54.478 Elapsed time = 0.002 seconds 00:08:54.478 EAL: Cannot find device (10000:00:01.0) 00:08:54.478 EAL: Failed to attach device on primary process 00:08:54.478 00:08:54.478 real 0m0.023s 00:08:54.478 user 0m0.009s 00:08:54.478 sys 0m0.013s 00:08:54.478 06:03:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.478 06:03:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:54.478 ************************************ 00:08:54.478 END TEST env_pci 00:08:54.478 ************************************ 00:08:54.736 06:03:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:54.736 06:03:59 env -- env/env.sh@15 -- # uname 00:08:54.737 06:03:59 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:54.737 06:03:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:54.737 06:03:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:54.737 06:03:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.737 06:03:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.737 06:03:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:54.737 ************************************ 00:08:54.737 START TEST env_dpdk_post_init 00:08:54.737 ************************************ 00:08:54.737 06:03:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:54.737 EAL: Detected CPU lcores: 10 00:08:54.737 EAL: Detected NUMA nodes: 1 00:08:54.737 EAL: Detected shared linkage of DPDK 00:08:54.737 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:54.737 EAL: Selected IOVA mode 'PA' 00:08:54.737 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:54.737 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:54.737 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:54.737 Starting DPDK initialization... 00:08:54.737 Starting SPDK post initialization... 00:08:54.737 SPDK NVMe probe 00:08:54.737 Attaching to 0000:00:10.0 00:08:54.737 Attaching to 0000:00:11.0 00:08:54.737 Attached to 0000:00:10.0 00:08:54.737 Attached to 0000:00:11.0 00:08:54.737 Cleaning up... 00:08:54.737 00:08:54.737 real 0m0.211s 00:08:54.737 user 0m0.070s 00:08:54.737 sys 0m0.040s 00:08:54.737 06:03:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.737 06:03:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:54.737 ************************************ 00:08:54.737 END TEST env_dpdk_post_init 00:08:54.737 ************************************ 00:08:54.995 06:03:59 env -- env/env.sh@26 -- # uname 00:08:54.995 06:03:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:54.995 06:03:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:54.995 06:03:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.995 06:03:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.995 06:03:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 ************************************ 00:08:54.995 START TEST env_mem_callbacks 00:08:54.995 ************************************ 00:08:54.995 06:03:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:54.995 EAL: Detected CPU lcores: 10 00:08:54.995 EAL: Detected NUMA nodes: 1 00:08:54.995 EAL: Detected shared linkage of DPDK 00:08:54.995 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:54.995 EAL: Selected IOVA mode 'PA' 00:08:54.995 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:54.995 00:08:54.995 00:08:54.995 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.995 http://cunit.sourceforge.net/ 00:08:54.995 00:08:54.995 00:08:54.995 Suite: memory 00:08:54.995 Test: test ... 
00:08:54.995 register 0x200000200000 2097152 00:08:54.995 malloc 3145728 00:08:54.995 register 0x200000400000 4194304 00:08:54.995 buf 0x200000500000 len 3145728 PASSED 00:08:54.995 malloc 64 00:08:54.995 buf 0x2000004fff40 len 64 PASSED 00:08:54.995 malloc 4194304 00:08:54.995 register 0x200000800000 6291456 00:08:54.995 buf 0x200000a00000 len 4194304 PASSED 00:08:54.995 free 0x200000500000 3145728 00:08:54.995 free 0x2000004fff40 64 00:08:54.995 unregister 0x200000400000 4194304 PASSED 00:08:54.995 free 0x200000a00000 4194304 00:08:54.995 unregister 0x200000800000 6291456 PASSED 00:08:54.995 malloc 8388608 00:08:54.995 register 0x200000400000 10485760 00:08:54.995 buf 0x200000600000 len 8388608 PASSED 00:08:54.995 free 0x200000600000 8388608 00:08:54.995 unregister 0x200000400000 10485760 PASSED 00:08:54.995 passed 00:08:54.995 00:08:54.995 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.995 suites 1 1 n/a 0 0 00:08:54.995 tests 1 1 1 0 0 00:08:54.995 asserts 15 15 15 0 n/a 00:08:54.995 00:08:54.995 Elapsed time = 0.009 seconds 00:08:54.995 00:08:54.995 real 0m0.143s 00:08:54.995 user 0m0.014s 00:08:54.995 sys 0m0.028s 00:08:54.995 06:04:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.995 ************************************ 00:08:54.995 END TEST env_mem_callbacks 00:08:54.995 06:04:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 ************************************ 00:08:54.995 00:08:54.995 real 0m2.683s 00:08:54.995 user 0m1.391s 00:08:54.995 sys 0m0.932s 00:08:54.995 06:04:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.995 06:04:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 ************************************ 00:08:54.995 END TEST env 00:08:54.995 ************************************ 00:08:55.254 06:04:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:55.254 06:04:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.254 06:04:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.254 06:04:00 -- common/autotest_common.sh@10 -- # set +x 00:08:55.254 ************************************ 00:08:55.254 START TEST rpc 00:08:55.254 ************************************ 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:55.254 * Looking for test storage... 
00:08:55.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.254 06:04:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.254 06:04:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.254 06:04:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.254 06:04:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.254 06:04:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.254 06:04:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:55.254 06:04:00 rpc -- scripts/common.sh@345 -- # : 1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.254 06:04:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.254 06:04:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@353 -- # local d=1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.254 06:04:00 rpc -- scripts/common.sh@355 -- # echo 1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.254 06:04:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@353 -- # local d=2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.254 06:04:00 rpc -- scripts/common.sh@355 -- # echo 2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.254 06:04:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.254 06:04:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.254 06:04:00 rpc -- scripts/common.sh@368 -- # return 0 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.254 --rc genhtml_branch_coverage=1 00:08:55.254 --rc genhtml_function_coverage=1 00:08:55.254 --rc genhtml_legend=1 00:08:55.254 --rc geninfo_all_blocks=1 00:08:55.254 --rc geninfo_unexecuted_blocks=1 00:08:55.254 00:08:55.254 ' 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.254 --rc genhtml_branch_coverage=1 00:08:55.254 --rc genhtml_function_coverage=1 00:08:55.254 --rc genhtml_legend=1 00:08:55.254 --rc geninfo_all_blocks=1 00:08:55.254 --rc geninfo_unexecuted_blocks=1 00:08:55.254 00:08:55.254 ' 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.254 --rc genhtml_branch_coverage=1 00:08:55.254 --rc genhtml_function_coverage=1 00:08:55.254 --rc 
genhtml_legend=1 00:08:55.254 --rc geninfo_all_blocks=1 00:08:55.254 --rc geninfo_unexecuted_blocks=1 00:08:55.254 00:08:55.254 ' 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.254 --rc genhtml_branch_coverage=1 00:08:55.254 --rc genhtml_function_coverage=1 00:08:55.254 --rc genhtml_legend=1 00:08:55.254 --rc geninfo_all_blocks=1 00:08:55.254 --rc geninfo_unexecuted_blocks=1 00:08:55.254 00:08:55.254 ' 00:08:55.254 06:04:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56948 00:08:55.254 06:04:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:55.254 06:04:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.254 06:04:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56948 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 56948 ']' 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.254 06:04:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.512 [2024-11-27 06:04:00.402075] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:08:55.512 [2024-11-27 06:04:00.402213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56948 ] 00:08:55.512 [2024-11-27 06:04:00.547665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.771 [2024-11-27 06:04:00.612896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:55.771 [2024-11-27 06:04:00.612960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56948' to capture a snapshot of events at runtime. 00:08:55.771 [2024-11-27 06:04:00.612972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.771 [2024-11-27 06:04:00.612980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.771 [2024-11-27 06:04:00.612988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56948 for offline analysis/debug. 
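The two notices above describe how the bdev tracepoint group enabled with 'spdk_tgt -e bdev' can be inspected while the target (pid 56948 in this run; the pid changes on every run) is still alive. A minimal sketch of both capture paths, assuming spdk_trace is built at build/bin/ alongside spdk_tgt and that its -f flag accepts a copied trace file:

  # live snapshot of events for the enabled tracepoint group, as the notice suggests
  build/bin/spdk_trace -s spdk_tgt -p 56948

  # offline analysis: keep a copy of the shared-memory trace file, then parse the copy later
  cp /dev/shm/spdk_tgt_trace.pid56948 /tmp/
  build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid56948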
00:08:55.771 [2024-11-27 06:04:00.613465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.771 [2024-11-27 06:04:00.691630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.030 06:04:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.030 06:04:00 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:56.030 06:04:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:56.030 06:04:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:56.030 06:04:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:56.030 06:04:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:56.030 06:04:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.030 06:04:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.030 06:04:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.030 ************************************ 00:08:56.030 START TEST rpc_integrity 00:08:56.030 ************************************ 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:56.030 06:04:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.030 06:04:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.030 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.030 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:56.030 { 00:08:56.030 "name": "Malloc0", 00:08:56.030 "aliases": [ 00:08:56.030 "7cfca815-3017-4d14-8f8c-873bbc89640c" 00:08:56.030 ], 00:08:56.030 "product_name": "Malloc disk", 00:08:56.030 "block_size": 512, 00:08:56.030 "num_blocks": 16384, 00:08:56.030 "uuid": "7cfca815-3017-4d14-8f8c-873bbc89640c", 00:08:56.030 "assigned_rate_limits": { 00:08:56.030 "rw_ios_per_sec": 0, 00:08:56.030 "rw_mbytes_per_sec": 0, 00:08:56.030 "r_mbytes_per_sec": 0, 00:08:56.030 "w_mbytes_per_sec": 0 00:08:56.030 }, 00:08:56.030 "claimed": false, 00:08:56.030 "zoned": false, 00:08:56.030 
"supported_io_types": { 00:08:56.030 "read": true, 00:08:56.030 "write": true, 00:08:56.030 "unmap": true, 00:08:56.030 "flush": true, 00:08:56.030 "reset": true, 00:08:56.030 "nvme_admin": false, 00:08:56.030 "nvme_io": false, 00:08:56.030 "nvme_io_md": false, 00:08:56.030 "write_zeroes": true, 00:08:56.030 "zcopy": true, 00:08:56.030 "get_zone_info": false, 00:08:56.030 "zone_management": false, 00:08:56.030 "zone_append": false, 00:08:56.030 "compare": false, 00:08:56.030 "compare_and_write": false, 00:08:56.030 "abort": true, 00:08:56.030 "seek_hole": false, 00:08:56.030 "seek_data": false, 00:08:56.030 "copy": true, 00:08:56.030 "nvme_iov_md": false 00:08:56.030 }, 00:08:56.030 "memory_domains": [ 00:08:56.030 { 00:08:56.030 "dma_device_id": "system", 00:08:56.030 "dma_device_type": 1 00:08:56.030 }, 00:08:56.030 { 00:08:56.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.030 "dma_device_type": 2 00:08:56.030 } 00:08:56.030 ], 00:08:56.030 "driver_specific": {} 00:08:56.030 } 00:08:56.030 ]' 00:08:56.030 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:56.030 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:56.030 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:56.030 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.030 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.030 [2024-11-27 06:04:01.068584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:56.031 [2024-11-27 06:04:01.068661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.031 [2024-11-27 06:04:01.068684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f1050 00:08:56.031 [2024-11-27 06:04:01.068694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.031 [2024-11-27 06:04:01.070510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.031 [2024-11-27 06:04:01.070547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:56.031 Passthru0 00:08:56.031 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.031 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:56.031 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.031 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.031 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.031 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:56.031 { 00:08:56.031 "name": "Malloc0", 00:08:56.031 "aliases": [ 00:08:56.031 "7cfca815-3017-4d14-8f8c-873bbc89640c" 00:08:56.031 ], 00:08:56.031 "product_name": "Malloc disk", 00:08:56.031 "block_size": 512, 00:08:56.031 "num_blocks": 16384, 00:08:56.031 "uuid": "7cfca815-3017-4d14-8f8c-873bbc89640c", 00:08:56.031 "assigned_rate_limits": { 00:08:56.031 "rw_ios_per_sec": 0, 00:08:56.031 "rw_mbytes_per_sec": 0, 00:08:56.031 "r_mbytes_per_sec": 0, 00:08:56.031 "w_mbytes_per_sec": 0 00:08:56.031 }, 00:08:56.031 "claimed": true, 00:08:56.031 "claim_type": "exclusive_write", 00:08:56.031 "zoned": false, 00:08:56.031 "supported_io_types": { 00:08:56.031 "read": true, 00:08:56.031 "write": true, 00:08:56.031 "unmap": true, 00:08:56.031 "flush": true, 00:08:56.031 "reset": true, 00:08:56.031 "nvme_admin": false, 
00:08:56.031 "nvme_io": false, 00:08:56.031 "nvme_io_md": false, 00:08:56.031 "write_zeroes": true, 00:08:56.031 "zcopy": true, 00:08:56.031 "get_zone_info": false, 00:08:56.031 "zone_management": false, 00:08:56.031 "zone_append": false, 00:08:56.031 "compare": false, 00:08:56.031 "compare_and_write": false, 00:08:56.031 "abort": true, 00:08:56.031 "seek_hole": false, 00:08:56.031 "seek_data": false, 00:08:56.031 "copy": true, 00:08:56.031 "nvme_iov_md": false 00:08:56.031 }, 00:08:56.031 "memory_domains": [ 00:08:56.031 { 00:08:56.031 "dma_device_id": "system", 00:08:56.031 "dma_device_type": 1 00:08:56.031 }, 00:08:56.031 { 00:08:56.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.031 "dma_device_type": 2 00:08:56.031 } 00:08:56.031 ], 00:08:56.031 "driver_specific": {} 00:08:56.031 }, 00:08:56.031 { 00:08:56.031 "name": "Passthru0", 00:08:56.031 "aliases": [ 00:08:56.031 "0f373964-2294-58f8-80ef-b85be5ab6200" 00:08:56.031 ], 00:08:56.031 "product_name": "passthru", 00:08:56.031 "block_size": 512, 00:08:56.031 "num_blocks": 16384, 00:08:56.031 "uuid": "0f373964-2294-58f8-80ef-b85be5ab6200", 00:08:56.031 "assigned_rate_limits": { 00:08:56.031 "rw_ios_per_sec": 0, 00:08:56.031 "rw_mbytes_per_sec": 0, 00:08:56.031 "r_mbytes_per_sec": 0, 00:08:56.031 "w_mbytes_per_sec": 0 00:08:56.031 }, 00:08:56.031 "claimed": false, 00:08:56.031 "zoned": false, 00:08:56.031 "supported_io_types": { 00:08:56.031 "read": true, 00:08:56.031 "write": true, 00:08:56.031 "unmap": true, 00:08:56.031 "flush": true, 00:08:56.031 "reset": true, 00:08:56.031 "nvme_admin": false, 00:08:56.031 "nvme_io": false, 00:08:56.031 "nvme_io_md": false, 00:08:56.031 "write_zeroes": true, 00:08:56.031 "zcopy": true, 00:08:56.031 "get_zone_info": false, 00:08:56.031 "zone_management": false, 00:08:56.031 "zone_append": false, 00:08:56.031 "compare": false, 00:08:56.031 "compare_and_write": false, 00:08:56.031 "abort": true, 00:08:56.031 "seek_hole": false, 00:08:56.031 "seek_data": false, 00:08:56.031 "copy": true, 00:08:56.031 "nvme_iov_md": false 00:08:56.031 }, 00:08:56.031 "memory_domains": [ 00:08:56.031 { 00:08:56.031 "dma_device_id": "system", 00:08:56.031 "dma_device_type": 1 00:08:56.031 }, 00:08:56.031 { 00:08:56.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.031 "dma_device_type": 2 00:08:56.031 } 00:08:56.031 ], 00:08:56.031 "driver_specific": { 00:08:56.031 "passthru": { 00:08:56.031 "name": "Passthru0", 00:08:56.031 "base_bdev_name": "Malloc0" 00:08:56.031 } 00:08:56.031 } 00:08:56.031 } 00:08:56.031 ]' 00:08:56.031 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:56.290 06:04:01 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:56.290 06:04:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:56.290 00:08:56.290 real 0m0.333s 00:08:56.290 user 0m0.224s 00:08:56.290 sys 0m0.042s 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.290 ************************************ 00:08:56.290 END TEST rpc_integrity 00:08:56.290 ************************************ 00:08:56.290 06:04:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 06:04:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:56.290 06:04:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.290 06:04:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.290 06:04:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 ************************************ 00:08:56.290 START TEST rpc_plugins 00:08:56.290 ************************************ 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:56.290 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.290 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:56.290 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:56.290 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.290 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:56.290 { 00:08:56.290 "name": "Malloc1", 00:08:56.290 "aliases": [ 00:08:56.290 "6b0cda27-a7fb-4c6f-8b66-77e5ae6d7842" 00:08:56.290 ], 00:08:56.290 "product_name": "Malloc disk", 00:08:56.290 "block_size": 4096, 00:08:56.290 "num_blocks": 256, 00:08:56.290 "uuid": "6b0cda27-a7fb-4c6f-8b66-77e5ae6d7842", 00:08:56.290 "assigned_rate_limits": { 00:08:56.290 "rw_ios_per_sec": 0, 00:08:56.290 "rw_mbytes_per_sec": 0, 00:08:56.290 "r_mbytes_per_sec": 0, 00:08:56.290 "w_mbytes_per_sec": 0 00:08:56.291 }, 00:08:56.291 "claimed": false, 00:08:56.291 "zoned": false, 00:08:56.291 "supported_io_types": { 00:08:56.291 "read": true, 00:08:56.291 "write": true, 00:08:56.291 "unmap": true, 00:08:56.291 "flush": true, 00:08:56.291 "reset": true, 00:08:56.291 "nvme_admin": false, 00:08:56.291 "nvme_io": false, 00:08:56.291 "nvme_io_md": false, 00:08:56.291 "write_zeroes": true, 00:08:56.291 "zcopy": true, 00:08:56.291 "get_zone_info": false, 00:08:56.291 "zone_management": false, 00:08:56.291 "zone_append": false, 00:08:56.291 "compare": false, 00:08:56.291 "compare_and_write": false, 00:08:56.291 "abort": true, 00:08:56.291 "seek_hole": false, 00:08:56.291 "seek_data": false, 00:08:56.291 "copy": true, 00:08:56.291 "nvme_iov_md": false 00:08:56.291 }, 00:08:56.291 "memory_domains": [ 00:08:56.291 { 
00:08:56.291 "dma_device_id": "system", 00:08:56.291 "dma_device_type": 1 00:08:56.291 }, 00:08:56.291 { 00:08:56.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.291 "dma_device_type": 2 00:08:56.291 } 00:08:56.291 ], 00:08:56.291 "driver_specific": {} 00:08:56.291 } 00:08:56.291 ]' 00:08:56.291 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:56.291 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:56.291 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:56.291 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.291 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:56.550 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.550 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:56.550 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.550 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:56.550 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.550 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:56.550 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:56.550 06:04:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:56.550 00:08:56.550 real 0m0.156s 00:08:56.550 user 0m0.104s 00:08:56.550 sys 0m0.015s 00:08:56.550 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.550 06:04:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:56.550 ************************************ 00:08:56.550 END TEST rpc_plugins 00:08:56.550 ************************************ 00:08:56.550 06:04:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:56.550 06:04:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.550 06:04:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.550 06:04:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.550 ************************************ 00:08:56.550 START TEST rpc_trace_cmd_test 00:08:56.550 ************************************ 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:56.550 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56948", 00:08:56.550 "tpoint_group_mask": "0x8", 00:08:56.550 "iscsi_conn": { 00:08:56.550 "mask": "0x2", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "scsi": { 00:08:56.550 "mask": "0x4", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "bdev": { 00:08:56.550 "mask": "0x8", 00:08:56.550 "tpoint_mask": "0xffffffffffffffff" 00:08:56.550 }, 00:08:56.550 "nvmf_rdma": { 00:08:56.550 "mask": "0x10", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "nvmf_tcp": { 00:08:56.550 "mask": "0x20", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "ftl": { 00:08:56.550 
"mask": "0x40", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "blobfs": { 00:08:56.550 "mask": "0x80", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "dsa": { 00:08:56.550 "mask": "0x200", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "thread": { 00:08:56.550 "mask": "0x400", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "nvme_pcie": { 00:08:56.550 "mask": "0x800", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "iaa": { 00:08:56.550 "mask": "0x1000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "nvme_tcp": { 00:08:56.550 "mask": "0x2000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "bdev_nvme": { 00:08:56.550 "mask": "0x4000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "sock": { 00:08:56.550 "mask": "0x8000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "blob": { 00:08:56.550 "mask": "0x10000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "bdev_raid": { 00:08:56.550 "mask": "0x20000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 }, 00:08:56.550 "scheduler": { 00:08:56.550 "mask": "0x40000", 00:08:56.550 "tpoint_mask": "0x0" 00:08:56.550 } 00:08:56.550 }' 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:56.550 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:56.811 ************************************ 00:08:56.811 END TEST rpc_trace_cmd_test 00:08:56.811 ************************************ 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:56.811 00:08:56.811 real 0m0.297s 00:08:56.811 user 0m0.260s 00:08:56.811 sys 0m0.029s 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.811 06:04:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.811 06:04:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:56.811 06:04:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:56.811 06:04:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:56.811 06:04:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.811 06:04:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.811 06:04:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.811 ************************************ 00:08:56.811 START TEST rpc_daemon_integrity 00:08:56.811 ************************************ 00:08:56.811 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:56.811 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:56.811 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.811 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:56.811 
06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.811 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:56.811 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:57.070 { 00:08:57.070 "name": "Malloc2", 00:08:57.070 "aliases": [ 00:08:57.070 "b130f13b-25d4-42d3-bb4f-0e035a49f890" 00:08:57.070 ], 00:08:57.070 "product_name": "Malloc disk", 00:08:57.070 "block_size": 512, 00:08:57.070 "num_blocks": 16384, 00:08:57.070 "uuid": "b130f13b-25d4-42d3-bb4f-0e035a49f890", 00:08:57.070 "assigned_rate_limits": { 00:08:57.070 "rw_ios_per_sec": 0, 00:08:57.070 "rw_mbytes_per_sec": 0, 00:08:57.070 "r_mbytes_per_sec": 0, 00:08:57.070 "w_mbytes_per_sec": 0 00:08:57.070 }, 00:08:57.070 "claimed": false, 00:08:57.070 "zoned": false, 00:08:57.070 "supported_io_types": { 00:08:57.070 "read": true, 00:08:57.070 "write": true, 00:08:57.070 "unmap": true, 00:08:57.070 "flush": true, 00:08:57.070 "reset": true, 00:08:57.070 "nvme_admin": false, 00:08:57.070 "nvme_io": false, 00:08:57.070 "nvme_io_md": false, 00:08:57.070 "write_zeroes": true, 00:08:57.070 "zcopy": true, 00:08:57.070 "get_zone_info": false, 00:08:57.070 "zone_management": false, 00:08:57.070 "zone_append": false, 00:08:57.070 "compare": false, 00:08:57.070 "compare_and_write": false, 00:08:57.070 "abort": true, 00:08:57.070 "seek_hole": false, 00:08:57.070 "seek_data": false, 00:08:57.070 "copy": true, 00:08:57.070 "nvme_iov_md": false 00:08:57.070 }, 00:08:57.070 "memory_domains": [ 00:08:57.070 { 00:08:57.070 "dma_device_id": "system", 00:08:57.070 "dma_device_type": 1 00:08:57.070 }, 00:08:57.070 { 00:08:57.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.070 "dma_device_type": 2 00:08:57.070 } 00:08:57.070 ], 00:08:57.070 "driver_specific": {} 00:08:57.070 } 00:08:57.070 ]' 00:08:57.070 06:04:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:57.070 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:57.070 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:57.070 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.070 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.070 [2024-11-27 06:04:02.009375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:57.070 [2024-11-27 06:04:02.009433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:08:57.070 [2024-11-27 06:04:02.009454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5fc030 00:08:57.070 [2024-11-27 06:04:02.009464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.071 [2024-11-27 06:04:02.011186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.071 [2024-11-27 06:04:02.011360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:57.071 Passthru0 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:57.071 { 00:08:57.071 "name": "Malloc2", 00:08:57.071 "aliases": [ 00:08:57.071 "b130f13b-25d4-42d3-bb4f-0e035a49f890" 00:08:57.071 ], 00:08:57.071 "product_name": "Malloc disk", 00:08:57.071 "block_size": 512, 00:08:57.071 "num_blocks": 16384, 00:08:57.071 "uuid": "b130f13b-25d4-42d3-bb4f-0e035a49f890", 00:08:57.071 "assigned_rate_limits": { 00:08:57.071 "rw_ios_per_sec": 0, 00:08:57.071 "rw_mbytes_per_sec": 0, 00:08:57.071 "r_mbytes_per_sec": 0, 00:08:57.071 "w_mbytes_per_sec": 0 00:08:57.071 }, 00:08:57.071 "claimed": true, 00:08:57.071 "claim_type": "exclusive_write", 00:08:57.071 "zoned": false, 00:08:57.071 "supported_io_types": { 00:08:57.071 "read": true, 00:08:57.071 "write": true, 00:08:57.071 "unmap": true, 00:08:57.071 "flush": true, 00:08:57.071 "reset": true, 00:08:57.071 "nvme_admin": false, 00:08:57.071 "nvme_io": false, 00:08:57.071 "nvme_io_md": false, 00:08:57.071 "write_zeroes": true, 00:08:57.071 "zcopy": true, 00:08:57.071 "get_zone_info": false, 00:08:57.071 "zone_management": false, 00:08:57.071 "zone_append": false, 00:08:57.071 "compare": false, 00:08:57.071 "compare_and_write": false, 00:08:57.071 "abort": true, 00:08:57.071 "seek_hole": false, 00:08:57.071 "seek_data": false, 00:08:57.071 "copy": true, 00:08:57.071 "nvme_iov_md": false 00:08:57.071 }, 00:08:57.071 "memory_domains": [ 00:08:57.071 { 00:08:57.071 "dma_device_id": "system", 00:08:57.071 "dma_device_type": 1 00:08:57.071 }, 00:08:57.071 { 00:08:57.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.071 "dma_device_type": 2 00:08:57.071 } 00:08:57.071 ], 00:08:57.071 "driver_specific": {} 00:08:57.071 }, 00:08:57.071 { 00:08:57.071 "name": "Passthru0", 00:08:57.071 "aliases": [ 00:08:57.071 "ce92e84d-ee01-530e-9c8d-61a3c6a05cb2" 00:08:57.071 ], 00:08:57.071 "product_name": "passthru", 00:08:57.071 "block_size": 512, 00:08:57.071 "num_blocks": 16384, 00:08:57.071 "uuid": "ce92e84d-ee01-530e-9c8d-61a3c6a05cb2", 00:08:57.071 "assigned_rate_limits": { 00:08:57.071 "rw_ios_per_sec": 0, 00:08:57.071 "rw_mbytes_per_sec": 0, 00:08:57.071 "r_mbytes_per_sec": 0, 00:08:57.071 "w_mbytes_per_sec": 0 00:08:57.071 }, 00:08:57.071 "claimed": false, 00:08:57.071 "zoned": false, 00:08:57.071 "supported_io_types": { 00:08:57.071 "read": true, 00:08:57.071 "write": true, 00:08:57.071 "unmap": true, 00:08:57.071 "flush": true, 00:08:57.071 "reset": true, 00:08:57.071 "nvme_admin": false, 00:08:57.071 "nvme_io": false, 00:08:57.071 "nvme_io_md": 
false, 00:08:57.071 "write_zeroes": true, 00:08:57.071 "zcopy": true, 00:08:57.071 "get_zone_info": false, 00:08:57.071 "zone_management": false, 00:08:57.071 "zone_append": false, 00:08:57.071 "compare": false, 00:08:57.071 "compare_and_write": false, 00:08:57.071 "abort": true, 00:08:57.071 "seek_hole": false, 00:08:57.071 "seek_data": false, 00:08:57.071 "copy": true, 00:08:57.071 "nvme_iov_md": false 00:08:57.071 }, 00:08:57.071 "memory_domains": [ 00:08:57.071 { 00:08:57.071 "dma_device_id": "system", 00:08:57.071 "dma_device_type": 1 00:08:57.071 }, 00:08:57.071 { 00:08:57.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.071 "dma_device_type": 2 00:08:57.071 } 00:08:57.071 ], 00:08:57.071 "driver_specific": { 00:08:57.071 "passthru": { 00:08:57.071 "name": "Passthru0", 00:08:57.071 "base_bdev_name": "Malloc2" 00:08:57.071 } 00:08:57.071 } 00:08:57.071 } 00:08:57.071 ]' 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:57.071 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:57.330 ************************************ 00:08:57.330 END TEST rpc_daemon_integrity 00:08:57.330 ************************************ 00:08:57.330 06:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:57.330 00:08:57.330 real 0m0.321s 00:08:57.330 user 0m0.226s 00:08:57.330 sys 0m0.026s 00:08:57.330 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.330 06:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:57.330 06:04:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:57.330 06:04:02 rpc -- rpc/rpc.sh@84 -- # killprocess 56948 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 56948 ']' 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@958 -- # kill -0 56948 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@959 -- # uname 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56948 00:08:57.330 killing process with pid 56948 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56948' 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@973 -- # kill 56948 00:08:57.330 06:04:02 rpc -- common/autotest_common.sh@978 -- # wait 56948 00:08:57.588 00:08:57.588 real 0m2.558s 00:08:57.588 user 0m3.224s 00:08:57.588 sys 0m0.690s 00:08:57.588 ************************************ 00:08:57.588 END TEST rpc 00:08:57.588 ************************************ 00:08:57.588 06:04:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.588 06:04:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 06:04:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:57.846 06:04:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.846 06:04:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.846 06:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 ************************************ 00:08:57.846 START TEST skip_rpc 00:08:57.846 ************************************ 00:08:57.846 06:04:02 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:57.846 * Looking for test storage... 00:08:57.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:57.846 06:04:02 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:57.846 06:04:02 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:57.846 06:04:02 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.104 06:04:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.104 --rc genhtml_branch_coverage=1 00:08:58.104 --rc genhtml_function_coverage=1 00:08:58.104 --rc genhtml_legend=1 00:08:58.104 --rc geninfo_all_blocks=1 00:08:58.104 --rc geninfo_unexecuted_blocks=1 00:08:58.104 00:08:58.104 ' 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.104 --rc genhtml_branch_coverage=1 00:08:58.104 --rc genhtml_function_coverage=1 00:08:58.104 --rc genhtml_legend=1 00:08:58.104 --rc geninfo_all_blocks=1 00:08:58.104 --rc geninfo_unexecuted_blocks=1 00:08:58.104 00:08:58.104 ' 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.104 --rc genhtml_branch_coverage=1 00:08:58.104 --rc genhtml_function_coverage=1 00:08:58.104 --rc genhtml_legend=1 00:08:58.104 --rc geninfo_all_blocks=1 00:08:58.104 --rc geninfo_unexecuted_blocks=1 00:08:58.104 00:08:58.104 ' 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.104 --rc genhtml_branch_coverage=1 00:08:58.104 --rc genhtml_function_coverage=1 00:08:58.104 --rc genhtml_legend=1 00:08:58.104 --rc geninfo_all_blocks=1 00:08:58.104 --rc geninfo_unexecuted_blocks=1 00:08:58.104 00:08:58.104 ' 00:08:58.104 06:04:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:58.104 06:04:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:58.104 06:04:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.104 06:04:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.105 ************************************ 00:08:58.105 START TEST skip_rpc 00:08:58.105 ************************************ 00:08:58.105 06:04:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:58.105 06:04:02 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57152 00:08:58.105 06:04:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:58.105 06:04:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:58.105 06:04:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:58.105 [2024-11-27 06:04:03.041928] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:08:58.105 [2024-11-27 06:04:03.042451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57152 ] 00:08:58.105 [2024-11-27 06:04:03.198249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.362 [2024-11-27 06:04:03.279451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.362 [2024-11-27 06:04:03.365451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57152 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57152 ']' 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57152 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.627 06:04:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57152 00:09:03.627 killing process with pid 57152 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57152' 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57152 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57152 00:09:03.627 00:09:03.627 real 0m5.457s 00:09:03.627 user 0m5.025s 00:09:03.627 sys 0m0.332s 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.627 ************************************ 00:09:03.627 END TEST skip_rpc 00:09:03.627 ************************************ 00:09:03.627 06:04:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 06:04:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:03.627 06:04:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.627 06:04:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.627 06:04:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 ************************************ 00:09:03.627 START TEST skip_rpc_with_json 00:09:03.627 ************************************ 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57233 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57233 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57233 ']' 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.627 06:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 [2024-11-27 06:04:08.553704] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
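The skip_rpc_with_json stage that starts here exercises a JSON-config round trip: drive the freshly started target (pid 57233) over /var/tmp/spdk.sock, dump its configuration, and reuse that file to boot a second target with the RPC server disabled. A rough sketch of the same sequence as seen in the log output that follows, using scripts/rpc.py from the SPDK repo (an assumed invocation path; the test itself goes through its rpc_cmd wrapper, and the config path mirrors the CONFIG_PATH set earlier):

  # querying a transport that has not been created yet fails, as in the log below
  scripts/rpc.py nvmf_get_transports --trtype tcp    # -> "No such device"

  # create the TCP transport, then save the live configuration as JSON
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json

  # a second target can then come up from the file alone, with no RPC server at all
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json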
00:09:03.627 [2024-11-27 06:04:08.553821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57233 ] 00:09:03.627 [2024-11-27 06:04:08.702150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.885 [2024-11-27 06:04:08.763255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.885 [2024-11-27 06:04:08.836879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 [2024-11-27 06:04:09.050633] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:04.144 request: 00:09:04.144 { 00:09:04.144 "trtype": "tcp", 00:09:04.144 "method": "nvmf_get_transports", 00:09:04.144 "req_id": 1 00:09:04.144 } 00:09:04.144 Got JSON-RPC error response 00:09:04.144 response: 00:09:04.144 { 00:09:04.144 "code": -19, 00:09:04.144 "message": "No such device" 00:09:04.144 } 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 [2024-11-27 06:04:09.062753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.402 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:04.402 { 00:09:04.402 "subsystems": [ 00:09:04.402 { 00:09:04.402 "subsystem": "fsdev", 00:09:04.402 "config": [ 00:09:04.402 { 00:09:04.402 "method": "fsdev_set_opts", 00:09:04.402 "params": { 00:09:04.402 "fsdev_io_pool_size": 65535, 00:09:04.402 "fsdev_io_cache_size": 256 00:09:04.402 } 00:09:04.402 } 00:09:04.402 ] 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "subsystem": "keyring", 00:09:04.402 "config": [] 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "subsystem": "iobuf", 00:09:04.402 "config": [ 00:09:04.402 { 00:09:04.402 "method": "iobuf_set_options", 00:09:04.402 "params": { 00:09:04.402 "small_pool_count": 8192, 00:09:04.402 "large_pool_count": 1024, 00:09:04.402 "small_bufsize": 8192, 00:09:04.402 "large_bufsize": 135168, 00:09:04.402 "enable_numa": false 00:09:04.402 } 
00:09:04.402 } 00:09:04.402 ] 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "subsystem": "sock", 00:09:04.402 "config": [ 00:09:04.402 { 00:09:04.402 "method": "sock_set_default_impl", 00:09:04.402 "params": { 00:09:04.402 "impl_name": "uring" 00:09:04.402 } 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "method": "sock_impl_set_options", 00:09:04.402 "params": { 00:09:04.402 "impl_name": "ssl", 00:09:04.402 "recv_buf_size": 4096, 00:09:04.402 "send_buf_size": 4096, 00:09:04.402 "enable_recv_pipe": true, 00:09:04.402 "enable_quickack": false, 00:09:04.402 "enable_placement_id": 0, 00:09:04.402 "enable_zerocopy_send_server": true, 00:09:04.402 "enable_zerocopy_send_client": false, 00:09:04.402 "zerocopy_threshold": 0, 00:09:04.402 "tls_version": 0, 00:09:04.402 "enable_ktls": false 00:09:04.402 } 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "method": "sock_impl_set_options", 00:09:04.402 "params": { 00:09:04.402 "impl_name": "posix", 00:09:04.402 "recv_buf_size": 2097152, 00:09:04.402 "send_buf_size": 2097152, 00:09:04.402 "enable_recv_pipe": true, 00:09:04.402 "enable_quickack": false, 00:09:04.402 "enable_placement_id": 0, 00:09:04.402 "enable_zerocopy_send_server": true, 00:09:04.402 "enable_zerocopy_send_client": false, 00:09:04.402 "zerocopy_threshold": 0, 00:09:04.402 "tls_version": 0, 00:09:04.402 "enable_ktls": false 00:09:04.402 } 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "method": "sock_impl_set_options", 00:09:04.402 "params": { 00:09:04.402 "impl_name": "uring", 00:09:04.402 "recv_buf_size": 2097152, 00:09:04.402 "send_buf_size": 2097152, 00:09:04.402 "enable_recv_pipe": true, 00:09:04.402 "enable_quickack": false, 00:09:04.402 "enable_placement_id": 0, 00:09:04.402 "enable_zerocopy_send_server": false, 00:09:04.402 "enable_zerocopy_send_client": false, 00:09:04.402 "zerocopy_threshold": 0, 00:09:04.402 "tls_version": 0, 00:09:04.402 "enable_ktls": false 00:09:04.402 } 00:09:04.402 } 00:09:04.402 ] 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "subsystem": "vmd", 00:09:04.402 "config": [] 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "subsystem": "accel", 00:09:04.402 "config": [ 00:09:04.402 { 00:09:04.402 "method": "accel_set_options", 00:09:04.402 "params": { 00:09:04.402 "small_cache_size": 128, 00:09:04.402 "large_cache_size": 16, 00:09:04.402 "task_count": 2048, 00:09:04.402 "sequence_count": 2048, 00:09:04.402 "buf_count": 2048 00:09:04.402 } 00:09:04.402 } 00:09:04.402 ] 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "subsystem": "bdev", 00:09:04.402 "config": [ 00:09:04.402 { 00:09:04.402 "method": "bdev_set_options", 00:09:04.402 "params": { 00:09:04.402 "bdev_io_pool_size": 65535, 00:09:04.402 "bdev_io_cache_size": 256, 00:09:04.402 "bdev_auto_examine": true, 00:09:04.402 "iobuf_small_cache_size": 128, 00:09:04.402 "iobuf_large_cache_size": 16 00:09:04.402 } 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "method": "bdev_raid_set_options", 00:09:04.402 "params": { 00:09:04.402 "process_window_size_kb": 1024, 00:09:04.402 "process_max_bandwidth_mb_sec": 0 00:09:04.402 } 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "method": "bdev_iscsi_set_options", 00:09:04.402 "params": { 00:09:04.402 "timeout_sec": 30 00:09:04.402 } 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "method": "bdev_nvme_set_options", 00:09:04.402 "params": { 00:09:04.402 "action_on_timeout": "none", 00:09:04.402 "timeout_us": 0, 00:09:04.402 "timeout_admin_us": 0, 00:09:04.402 "keep_alive_timeout_ms": 10000, 00:09:04.402 "arbitration_burst": 0, 00:09:04.402 "low_priority_weight": 0, 00:09:04.402 "medium_priority_weight": 
0, 00:09:04.402 "high_priority_weight": 0, 00:09:04.402 "nvme_adminq_poll_period_us": 10000, 00:09:04.402 "nvme_ioq_poll_period_us": 0, 00:09:04.402 "io_queue_requests": 0, 00:09:04.402 "delay_cmd_submit": true, 00:09:04.402 "transport_retry_count": 4, 00:09:04.402 "bdev_retry_count": 3, 00:09:04.402 "transport_ack_timeout": 0, 00:09:04.402 "ctrlr_loss_timeout_sec": 0, 00:09:04.402 "reconnect_delay_sec": 0, 00:09:04.402 "fast_io_fail_timeout_sec": 0, 00:09:04.402 "disable_auto_failback": false, 00:09:04.402 "generate_uuids": false, 00:09:04.402 "transport_tos": 0, 00:09:04.402 "nvme_error_stat": false, 00:09:04.402 "rdma_srq_size": 0, 00:09:04.402 "io_path_stat": false, 00:09:04.402 "allow_accel_sequence": false, 00:09:04.403 "rdma_max_cq_size": 0, 00:09:04.403 "rdma_cm_event_timeout_ms": 0, 00:09:04.403 "dhchap_digests": [ 00:09:04.403 "sha256", 00:09:04.403 "sha384", 00:09:04.403 "sha512" 00:09:04.403 ], 00:09:04.403 "dhchap_dhgroups": [ 00:09:04.403 "null", 00:09:04.403 "ffdhe2048", 00:09:04.403 "ffdhe3072", 00:09:04.403 "ffdhe4096", 00:09:04.403 "ffdhe6144", 00:09:04.403 "ffdhe8192" 00:09:04.403 ] 00:09:04.403 } 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "method": "bdev_nvme_set_hotplug", 00:09:04.403 "params": { 00:09:04.403 "period_us": 100000, 00:09:04.403 "enable": false 00:09:04.403 } 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "method": "bdev_wait_for_examine" 00:09:04.403 } 00:09:04.403 ] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "scsi", 00:09:04.403 "config": null 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "scheduler", 00:09:04.403 "config": [ 00:09:04.403 { 00:09:04.403 "method": "framework_set_scheduler", 00:09:04.403 "params": { 00:09:04.403 "name": "static" 00:09:04.403 } 00:09:04.403 } 00:09:04.403 ] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "vhost_scsi", 00:09:04.403 "config": [] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "vhost_blk", 00:09:04.403 "config": [] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "ublk", 00:09:04.403 "config": [] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "nbd", 00:09:04.403 "config": [] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "nvmf", 00:09:04.403 "config": [ 00:09:04.403 { 00:09:04.403 "method": "nvmf_set_config", 00:09:04.403 "params": { 00:09:04.403 "discovery_filter": "match_any", 00:09:04.403 "admin_cmd_passthru": { 00:09:04.403 "identify_ctrlr": false 00:09:04.403 }, 00:09:04.403 "dhchap_digests": [ 00:09:04.403 "sha256", 00:09:04.403 "sha384", 00:09:04.403 "sha512" 00:09:04.403 ], 00:09:04.403 "dhchap_dhgroups": [ 00:09:04.403 "null", 00:09:04.403 "ffdhe2048", 00:09:04.403 "ffdhe3072", 00:09:04.403 "ffdhe4096", 00:09:04.403 "ffdhe6144", 00:09:04.403 "ffdhe8192" 00:09:04.403 ] 00:09:04.403 } 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "method": "nvmf_set_max_subsystems", 00:09:04.403 "params": { 00:09:04.403 "max_subsystems": 1024 00:09:04.403 } 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "method": "nvmf_set_crdt", 00:09:04.403 "params": { 00:09:04.403 "crdt1": 0, 00:09:04.403 "crdt2": 0, 00:09:04.403 "crdt3": 0 00:09:04.403 } 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "method": "nvmf_create_transport", 00:09:04.403 "params": { 00:09:04.403 "trtype": "TCP", 00:09:04.403 "max_queue_depth": 128, 00:09:04.403 "max_io_qpairs_per_ctrlr": 127, 00:09:04.403 "in_capsule_data_size": 4096, 00:09:04.403 "max_io_size": 131072, 00:09:04.403 "io_unit_size": 131072, 00:09:04.403 "max_aq_depth": 128, 00:09:04.403 "num_shared_buffers": 511, 00:09:04.403 
"buf_cache_size": 4294967295, 00:09:04.403 "dif_insert_or_strip": false, 00:09:04.403 "zcopy": false, 00:09:04.403 "c2h_success": true, 00:09:04.403 "sock_priority": 0, 00:09:04.403 "abort_timeout_sec": 1, 00:09:04.403 "ack_timeout": 0, 00:09:04.403 "data_wr_pool_size": 0 00:09:04.403 } 00:09:04.403 } 00:09:04.403 ] 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "subsystem": "iscsi", 00:09:04.403 "config": [ 00:09:04.403 { 00:09:04.403 "method": "iscsi_set_options", 00:09:04.403 "params": { 00:09:04.403 "node_base": "iqn.2016-06.io.spdk", 00:09:04.403 "max_sessions": 128, 00:09:04.403 "max_connections_per_session": 2, 00:09:04.403 "max_queue_depth": 64, 00:09:04.403 "default_time2wait": 2, 00:09:04.403 "default_time2retain": 20, 00:09:04.403 "first_burst_length": 8192, 00:09:04.403 "immediate_data": true, 00:09:04.403 "allow_duplicated_isid": false, 00:09:04.403 "error_recovery_level": 0, 00:09:04.403 "nop_timeout": 60, 00:09:04.403 "nop_in_interval": 30, 00:09:04.403 "disable_chap": false, 00:09:04.403 "require_chap": false, 00:09:04.403 "mutual_chap": false, 00:09:04.403 "chap_group": 0, 00:09:04.403 "max_large_datain_per_connection": 64, 00:09:04.403 "max_r2t_per_connection": 4, 00:09:04.403 "pdu_pool_size": 36864, 00:09:04.403 "immediate_data_pool_size": 16384, 00:09:04.403 "data_out_pool_size": 2048 00:09:04.403 } 00:09:04.403 } 00:09:04.403 ] 00:09:04.403 } 00:09:04.403 ] 00:09:04.403 } 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57233 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57233 ']' 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57233 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57233 00:09:04.403 killing process with pid 57233 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57233' 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57233 00:09:04.403 06:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57233 00:09:04.661 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57259 00:09:04.661 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:04.661 06:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57259 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57259 ']' 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57259 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:09.931 06:04:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57259 00:09:09.931 killing process with pid 57259 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57259' 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57259 00:09:09.931 06:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57259 00:09:10.190 06:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:10.191 ************************************ 00:09:10.191 END TEST skip_rpc_with_json 00:09:10.191 ************************************ 00:09:10.191 00:09:10.191 real 0m6.655s 00:09:10.191 user 0m6.171s 00:09:10.191 sys 0m0.692s 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.191 06:04:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:10.191 06:04:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.191 06:04:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.191 06:04:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.191 ************************************ 00:09:10.191 START TEST skip_rpc_with_delay 00:09:10.191 ************************************ 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.191 06:04:15 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:10.191 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:10.451 [2024-11-27 06:04:15.288105] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:10.451 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:10.451 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:10.451 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:10.451 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:10.451 00:09:10.451 real 0m0.122s 00:09:10.451 user 0m0.082s 00:09:10.451 sys 0m0.038s 00:09:10.451 ************************************ 00:09:10.451 END TEST skip_rpc_with_delay 00:09:10.451 ************************************ 00:09:10.451 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.451 06:04:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:10.451 06:04:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:10.451 06:04:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:10.451 06:04:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:10.451 06:04:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.451 06:04:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.451 06:04:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.451 ************************************ 00:09:10.451 START TEST exit_on_failed_rpc_init 00:09:10.451 ************************************ 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57368 00:09:10.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57368 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57368 ']' 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.451 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:10.451 [2024-11-27 06:04:15.446065] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:09:10.451 [2024-11-27 06:04:15.446204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57368 ] 00:09:10.711 [2024-11-27 06:04:15.604911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.711 [2024-11-27 06:04:15.688921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.711 [2024-11-27 06:04:15.769771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:10.971 06:04:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:10.971 [2024-11-27 06:04:16.062188] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:10.971 [2024-11-27 06:04:16.062282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57379 ] 00:09:11.230 [2024-11-27 06:04:16.225887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.230 [2024-11-27 06:04:16.304436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.230 [2024-11-27 06:04:16.304568] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:11.230 [2024-11-27 06:04:16.304587] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:11.230 [2024-11-27 06:04:16.304598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.488 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:11.488 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.488 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57368 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57368 ']' 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57368 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57368 00:09:11.489 killing process with pid 57368 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57368' 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57368 00:09:11.489 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57368 00:09:11.747 ************************************ 00:09:11.747 END TEST exit_on_failed_rpc_init 00:09:11.747 ************************************ 00:09:11.747 00:09:11.747 real 0m1.451s 00:09:11.747 user 0m1.556s 00:09:11.747 sys 0m0.428s 00:09:11.747 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.747 06:04:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:12.007 06:04:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:12.007 00:09:12.007 real 0m14.146s 00:09:12.007 user 0m13.051s 00:09:12.007 sys 0m1.723s 00:09:12.007 06:04:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.007 ************************************ 00:09:12.007 END TEST skip_rpc 00:09:12.007 ************************************ 00:09:12.007 06:04:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.007 06:04:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:12.007 06:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.007 06:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.007 06:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.007 
************************************ 00:09:12.007 START TEST rpc_client 00:09:12.007 ************************************ 00:09:12.007 06:04:16 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:12.007 * Looking for test storage... 00:09:12.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:12.007 06:04:17 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.007 06:04:17 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.007 06:04:17 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.007 06:04:17 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.007 06:04:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.269 06:04:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.269 --rc genhtml_branch_coverage=1 00:09:12.269 --rc genhtml_function_coverage=1 00:09:12.269 --rc genhtml_legend=1 00:09:12.269 --rc geninfo_all_blocks=1 00:09:12.269 --rc geninfo_unexecuted_blocks=1 00:09:12.269 00:09:12.269 ' 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.269 --rc genhtml_branch_coverage=1 00:09:12.269 --rc genhtml_function_coverage=1 00:09:12.269 --rc genhtml_legend=1 00:09:12.269 --rc geninfo_all_blocks=1 00:09:12.269 --rc geninfo_unexecuted_blocks=1 00:09:12.269 00:09:12.269 ' 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.269 --rc genhtml_branch_coverage=1 00:09:12.269 --rc genhtml_function_coverage=1 00:09:12.269 --rc genhtml_legend=1 00:09:12.269 --rc geninfo_all_blocks=1 00:09:12.269 --rc geninfo_unexecuted_blocks=1 00:09:12.269 00:09:12.269 ' 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.269 --rc genhtml_branch_coverage=1 00:09:12.269 --rc genhtml_function_coverage=1 00:09:12.269 --rc genhtml_legend=1 00:09:12.269 --rc geninfo_all_blocks=1 00:09:12.269 --rc geninfo_unexecuted_blocks=1 00:09:12.269 00:09:12.269 ' 00:09:12.269 06:04:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:12.269 OK 00:09:12.269 06:04:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:12.269 00:09:12.269 real 0m0.211s 00:09:12.269 user 0m0.126s 00:09:12.269 sys 0m0.094s 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.269 ************************************ 00:09:12.269 END TEST rpc_client 00:09:12.269 ************************************ 00:09:12.269 06:04:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:12.269 06:04:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:12.269 06:04:17 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.269 06:04:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.269 06:04:17 -- common/autotest_common.sh@10 -- # set +x 00:09:12.269 ************************************ 00:09:12.269 START TEST json_config 00:09:12.269 ************************************ 00:09:12.269 06:04:17 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:12.269 06:04:17 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.269 06:04:17 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.269 06:04:17 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.269 06:04:17 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.269 06:04:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.269 06:04:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.269 06:04:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.269 06:04:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.269 06:04:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.269 06:04:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:12.269 06:04:17 json_config -- scripts/common.sh@345 -- # : 1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.269 06:04:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.269 06:04:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@353 -- # local d=1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.269 06:04:17 json_config -- scripts/common.sh@355 -- # echo 1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.269 06:04:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@353 -- # local d=2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.269 06:04:17 json_config -- scripts/common.sh@355 -- # echo 2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.269 06:04:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.270 06:04:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.270 06:04:17 json_config -- scripts/common.sh@368 -- # return 0 00:09:12.270 06:04:17 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.270 06:04:17 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.270 --rc genhtml_branch_coverage=1 00:09:12.270 --rc genhtml_function_coverage=1 00:09:12.270 --rc genhtml_legend=1 00:09:12.270 --rc geninfo_all_blocks=1 00:09:12.270 --rc geninfo_unexecuted_blocks=1 00:09:12.270 00:09:12.270 ' 00:09:12.270 06:04:17 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.270 --rc genhtml_branch_coverage=1 00:09:12.270 --rc genhtml_function_coverage=1 00:09:12.270 --rc genhtml_legend=1 00:09:12.270 --rc geninfo_all_blocks=1 00:09:12.270 --rc geninfo_unexecuted_blocks=1 00:09:12.270 00:09:12.270 ' 00:09:12.270 06:04:17 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.270 --rc genhtml_branch_coverage=1 00:09:12.270 --rc genhtml_function_coverage=1 00:09:12.270 --rc genhtml_legend=1 00:09:12.270 --rc geninfo_all_blocks=1 00:09:12.270 --rc geninfo_unexecuted_blocks=1 00:09:12.270 00:09:12.270 ' 00:09:12.270 06:04:17 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.270 --rc genhtml_branch_coverage=1 00:09:12.270 --rc genhtml_function_coverage=1 00:09:12.270 --rc genhtml_legend=1 00:09:12.270 --rc geninfo_all_blocks=1 00:09:12.270 --rc geninfo_unexecuted_blocks=1 00:09:12.270 00:09:12.270 ' 00:09:12.270 06:04:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.270 06:04:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.528 06:04:17 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.528 06:04:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.528 06:04:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.528 06:04:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.528 06:04:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.528 06:04:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.528 06:04:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.529 06:04:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.529 06:04:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.529 06:04:17 json_config -- paths/export.sh@5 -- # export PATH 00:09:12.529 06:04:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@51 -- # : 0 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.529 06:04:17 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.529 06:04:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:12.529 INFO: JSON configuration test init 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.529 06:04:17 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:12.529 06:04:17 json_config -- json_config/common.sh@9 -- # local app=target 00:09:12.529 06:04:17 json_config -- json_config/common.sh@10 -- # shift 
00:09:12.529 06:04:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:12.529 06:04:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:12.529 06:04:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:12.529 06:04:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:12.529 06:04:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:12.529 Waiting for target to run... 00:09:12.529 06:04:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57515 00:09:12.529 06:04:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:12.529 06:04:17 json_config -- json_config/common.sh@25 -- # waitforlisten 57515 /var/tmp/spdk_tgt.sock 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 57515 ']' 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.529 06:04:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:12.529 06:04:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.529 [2024-11-27 06:04:17.471183] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:12.529 [2024-11-27 06:04:17.471291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57515 ] 00:09:13.097 [2024-11-27 06:04:17.918727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.097 [2024-11-27 06:04:17.976853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.666 06:04:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.666 00:09:13.666 06:04:18 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:13.666 06:04:18 json_config -- json_config/common.sh@26 -- # echo '' 00:09:13.666 06:04:18 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:13.666 06:04:18 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:13.666 06:04:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.666 06:04:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:13.666 06:04:18 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:13.666 06:04:18 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:13.666 06:04:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.666 06:04:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:13.666 06:04:18 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:13.666 06:04:18 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:13.666 06:04:18 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:13.925 [2024-11-27 06:04:18.870154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.183 06:04:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:14.184 06:04:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.184 06:04:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:14.184 06:04:19 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:14.184 06:04:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@54 -- # sort 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:14.442 06:04:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.442 06:04:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:14.442 06:04:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.442 06:04:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:14.442 06:04:19 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:14.442 06:04:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:14.442 06:04:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:14.701 MallocForNvmf0 00:09:14.958 06:04:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:14.958 06:04:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:15.216 MallocForNvmf1 00:09:15.216 06:04:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:15.216 06:04:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:15.475 [2024-11-27 06:04:20.397017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.475 06:04:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.475 06:04:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.733 06:04:20 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:15.733 06:04:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:16.297 06:04:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:16.297 06:04:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:16.555 06:04:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:16.555 06:04:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:16.812 [2024-11-27 06:04:21.771353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:16.812 06:04:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:16.812 06:04:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.812 06:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.812 06:04:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:16.813 06:04:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.813 06:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 06:04:21 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:09:16.813 06:04:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:16.813 06:04:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:17.379 MallocBdevForConfigChangeCheck 00:09:17.379 06:04:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:17.379 06:04:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.379 06:04:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.379 06:04:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:17.379 06:04:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:17.637 INFO: shutting down applications... 00:09:17.637 06:04:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:17.637 06:04:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:17.637 06:04:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:17.637 06:04:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:17.637 06:04:22 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:18.231 Calling clear_iscsi_subsystem 00:09:18.231 Calling clear_nvmf_subsystem 00:09:18.232 Calling clear_nbd_subsystem 00:09:18.232 Calling clear_ublk_subsystem 00:09:18.232 Calling clear_vhost_blk_subsystem 00:09:18.232 Calling clear_vhost_scsi_subsystem 00:09:18.232 Calling clear_bdev_subsystem 00:09:18.232 06:04:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:18.232 06:04:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:18.232 06:04:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:18.232 06:04:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:18.232 06:04:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:18.232 06:04:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:18.490 06:04:23 json_config -- json_config/json_config.sh@352 -- # break 00:09:18.490 06:04:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:18.490 06:04:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:18.490 06:04:23 json_config -- json_config/common.sh@31 -- # local app=target 00:09:18.490 06:04:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:18.490 06:04:23 json_config -- json_config/common.sh@35 -- # [[ -n 57515 ]] 00:09:18.490 06:04:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57515 00:09:18.490 06:04:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:18.490 06:04:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:18.490 06:04:23 json_config -- json_config/common.sh@41 -- # kill -0 57515 00:09:18.490 06:04:23 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:09:19.057 06:04:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:19.057 06:04:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:19.057 06:04:24 json_config -- json_config/common.sh@41 -- # kill -0 57515 00:09:19.057 06:04:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:19.057 06:04:24 json_config -- json_config/common.sh@43 -- # break 00:09:19.057 SPDK target shutdown done 00:09:19.057 06:04:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:19.057 06:04:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:19.057 INFO: relaunching applications... 00:09:19.057 06:04:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:19.057 06:04:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:19.057 06:04:24 json_config -- json_config/common.sh@9 -- # local app=target 00:09:19.057 06:04:24 json_config -- json_config/common.sh@10 -- # shift 00:09:19.057 06:04:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:19.057 06:04:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:19.057 06:04:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:19.057 06:04:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:19.057 06:04:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:19.057 06:04:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57725 00:09:19.057 Waiting for target to run... 00:09:19.057 06:04:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:19.057 06:04:24 json_config -- json_config/common.sh@25 -- # waitforlisten 57725 /var/tmp/spdk_tgt.sock 00:09:19.057 06:04:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 57725 ']' 00:09:19.057 06:04:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:19.057 06:04:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:19.057 06:04:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:19.057 06:04:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:19.057 06:04:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.057 06:04:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.057 [2024-11-27 06:04:24.126051] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:09:19.057 [2024-11-27 06:04:24.126188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57725 ] 00:09:19.624 [2024-11-27 06:04:24.698969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.882 [2024-11-27 06:04:24.774799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.883 [2024-11-27 06:04:24.921182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.140 [2024-11-27 06:04:25.161996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.140 [2024-11-27 06:04:25.194178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:20.399 06:04:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.399 00:09:20.399 06:04:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:20.399 06:04:25 json_config -- json_config/common.sh@26 -- # echo '' 00:09:20.399 06:04:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:20.399 INFO: Checking if target configuration is the same... 00:09:20.399 06:04:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:20.399 06:04:25 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:20.399 06:04:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:20.399 06:04:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:20.399 + '[' 2 -ne 2 ']' 00:09:20.399 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:20.399 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:20.399 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:20.399 +++ basename /dev/fd/62 00:09:20.399 ++ mktemp /tmp/62.XXX 00:09:20.399 + tmp_file_1=/tmp/62.rgE 00:09:20.399 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:20.399 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:20.399 + tmp_file_2=/tmp/spdk_tgt_config.json.fdY 00:09:20.399 + ret=0 00:09:20.399 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:20.657 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:20.914 + diff -u /tmp/62.rgE /tmp/spdk_tgt_config.json.fdY 00:09:20.914 + echo 'INFO: JSON config files are the same' 00:09:20.914 INFO: JSON config files are the same 00:09:20.914 + rm /tmp/62.rgE /tmp/spdk_tgt_config.json.fdY 00:09:20.914 + exit 0 00:09:20.914 06:04:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:20.914 INFO: changing configuration and checking if this can be detected... 00:09:20.914 06:04:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:09:20.914 06:04:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:20.914 06:04:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:21.172 06:04:26 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.172 06:04:26 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:21.172 06:04:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:21.172 + '[' 2 -ne 2 ']' 00:09:21.172 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:21.172 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:21.172 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.172 +++ basename /dev/fd/62 00:09:21.172 ++ mktemp /tmp/62.XXX 00:09:21.172 + tmp_file_1=/tmp/62.D7T 00:09:21.172 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.172 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:21.172 + tmp_file_2=/tmp/spdk_tgt_config.json.cDp 00:09:21.172 + ret=0 00:09:21.172 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:21.429 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:21.688 + diff -u /tmp/62.D7T /tmp/spdk_tgt_config.json.cDp 00:09:21.688 + ret=1 00:09:21.688 + echo '=== Start of file: /tmp/62.D7T ===' 00:09:21.688 + cat /tmp/62.D7T 00:09:21.688 + echo '=== End of file: /tmp/62.D7T ===' 00:09:21.688 + echo '' 00:09:21.688 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cDp ===' 00:09:21.688 + cat /tmp/spdk_tgt_config.json.cDp 00:09:21.688 + echo '=== End of file: /tmp/spdk_tgt_config.json.cDp ===' 00:09:21.688 + echo '' 00:09:21.688 + rm /tmp/62.D7T /tmp/spdk_tgt_config.json.cDp 00:09:21.688 + exit 1 00:09:21.688 INFO: configuration change detected. 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
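The two json_diff.sh invocations traced above follow one pattern: dump the running target's configuration over the RPC socket, normalize both JSON documents with config_filter.py -method sort, and diff them (exit 0 when they match, ret=1 once MallocBdevForConfigChangeCheck has been deleted). A condensed, hypothetical restatement of that check in bash -- compare_running_config and the relative script paths are illustrative, not names taken from the test suite:

    compare_running_config() {
        local expected=$1                      # e.g. spdk_tgt_config.json
        local rpc_sock=/var/tmp/spdk_tgt.sock
        local live sorted_live sorted_expected
        live=$(mktemp); sorted_live=$(mktemp); sorted_expected=$(mktemp)

        # Ask the running spdk_tgt for its current configuration.
        scripts/rpc.py -s "$rpc_sock" save_config > "$live"

        # Sort both documents so key/array ordering never shows up as a difference.
        test/json_config/config_filter.py -method sort < "$live"     > "$sorted_live"
        test/json_config/config_filter.py -method sort < "$expected" > "$sorted_expected"

        # A non-zero exit here is what the harness reports as a detected config change.
        diff -u "$sorted_expected" "$sorted_live"
    }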
00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@324 -- # [[ -n 57725 ]] 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.688 06:04:26 json_config -- json_config/json_config.sh@330 -- # killprocess 57725 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@954 -- # '[' -z 57725 ']' 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@958 -- # kill -0 57725 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@959 -- # uname 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57725 00:09:21.688 killing process with pid 57725 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57725' 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@973 -- # kill 57725 00:09:21.688 06:04:26 json_config -- common/autotest_common.sh@978 -- # wait 57725 00:09:21.946 06:04:26 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.946 06:04:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:21.946 06:04:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.946 06:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.946 06:04:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:21.946 INFO: Success 00:09:21.946 06:04:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:21.946 00:09:21.946 real 0m9.786s 00:09:21.946 user 0m14.346s 00:09:21.946 sys 0m2.099s 00:09:21.946 
06:04:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.946 06:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.946 ************************************ 00:09:21.946 END TEST json_config 00:09:21.946 ************************************ 00:09:21.946 06:04:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:21.946 06:04:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.946 06:04:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.946 06:04:27 -- common/autotest_common.sh@10 -- # set +x 00:09:21.946 ************************************ 00:09:21.946 START TEST json_config_extra_key 00:09:21.946 ************************************ 00:09:21.946 06:04:27 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:22.205 06:04:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.205 06:04:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.205 06:04:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.205 06:04:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.205 06:04:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.206 --rc genhtml_branch_coverage=1 00:09:22.206 --rc genhtml_function_coverage=1 00:09:22.206 --rc genhtml_legend=1 00:09:22.206 --rc geninfo_all_blocks=1 00:09:22.206 --rc geninfo_unexecuted_blocks=1 00:09:22.206 00:09:22.206 ' 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.206 --rc genhtml_branch_coverage=1 00:09:22.206 --rc genhtml_function_coverage=1 00:09:22.206 --rc genhtml_legend=1 00:09:22.206 --rc geninfo_all_blocks=1 00:09:22.206 --rc geninfo_unexecuted_blocks=1 00:09:22.206 00:09:22.206 ' 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.206 --rc genhtml_branch_coverage=1 00:09:22.206 --rc genhtml_function_coverage=1 00:09:22.206 --rc genhtml_legend=1 00:09:22.206 --rc geninfo_all_blocks=1 00:09:22.206 --rc geninfo_unexecuted_blocks=1 00:09:22.206 00:09:22.206 ' 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.206 --rc genhtml_branch_coverage=1 00:09:22.206 --rc genhtml_function_coverage=1 00:09:22.206 --rc genhtml_legend=1 00:09:22.206 --rc geninfo_all_blocks=1 00:09:22.206 --rc geninfo_unexecuted_blocks=1 00:09:22.206 00:09:22.206 ' 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.206 06:04:27 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.206 06:04:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.206 06:04:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.206 06:04:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.206 06:04:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.206 06:04:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:22.206 06:04:27 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.206 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.206 06:04:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:22.206 INFO: launching applications... 
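json_config_extra_key.sh (sourcing json_config/common.sh just above) keeps its per-application state in bash associative arrays keyed by app name, which the start helper expands into the spdk_tgt command line that appears below. A minimal sketch of that bookkeeping under the same assumptions -- start_app here is a simplified stand-in for json_config_test_start_app, not the actual helper:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='test/json_config/extra_key.json')

    start_app() {
        local app=$1
        # Launch the target in the background and remember its pid for later shutdown.
        build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }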
00:09:22.206 06:04:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57879 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:22.206 Waiting for target to run... 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57879 /var/tmp/spdk_tgt.sock 00:09:22.206 06:04:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57879 ']' 00:09:22.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.206 06:04:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:22.464 [2024-11-27 06:04:27.300376] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:22.464 [2024-11-27 06:04:27.300490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ] 00:09:22.721 [2024-11-27 06:04:27.746162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.979 [2024-11-27 06:04:27.817063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.979 [2024-11-27 06:04:27.851110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.238 00:09:23.238 INFO: shutting down applications... 00:09:23.238 06:04:28 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.238 06:04:28 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:23.238 06:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
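The shutdown sequence that follows (json_config_test_shutdown_app, json_config/common.sh lines 31-45 in the trace) is the same poll-and-wait loop already traced for pid 57515 earlier in this run: send SIGINT, then probe the pid with kill -0 until it exits or the retry budget runs out. A minimal sketch of that loop; wait_for_target_exit is a hypothetical name:

    wait_for_target_exit() {
        local pid=$1 i

        kill -SIGINT "$pid"                 # ask spdk_tgt to shut down cleanly

        for (( i = 0; i < 30; i++ )); do
            # kill -0 fails once the process is gone, i.e. shutdown is done.
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1                            # still alive after ~15s of polling
    }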
00:09:23.238 06:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57879 ]] 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57879 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57879 00:09:23.238 06:04:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57879 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:23.805 SPDK target shutdown done 00:09:23.805 Success 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:23.805 06:04:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:23.805 06:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:23.805 00:09:23.805 real 0m1.791s 00:09:23.805 user 0m1.669s 00:09:23.805 sys 0m0.484s 00:09:23.805 ************************************ 00:09:23.805 END TEST json_config_extra_key 00:09:23.805 ************************************ 00:09:23.805 06:04:28 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.805 06:04:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:23.805 06:04:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:23.805 06:04:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.805 06:04:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.805 06:04:28 -- common/autotest_common.sh@10 -- # set +x 00:09:23.805 ************************************ 00:09:23.805 START TEST alias_rpc 00:09:23.805 ************************************ 00:09:23.805 06:04:28 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:24.063 * Looking for test storage... 
00:09:24.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:24.064 06:04:28 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.064 06:04:28 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.064 06:04:28 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.064 06:04:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.064 --rc genhtml_branch_coverage=1 00:09:24.064 --rc genhtml_function_coverage=1 00:09:24.064 --rc genhtml_legend=1 00:09:24.064 --rc geninfo_all_blocks=1 00:09:24.064 --rc geninfo_unexecuted_blocks=1 00:09:24.064 00:09:24.064 ' 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.064 --rc genhtml_branch_coverage=1 00:09:24.064 --rc genhtml_function_coverage=1 00:09:24.064 --rc genhtml_legend=1 00:09:24.064 --rc geninfo_all_blocks=1 00:09:24.064 --rc geninfo_unexecuted_blocks=1 00:09:24.064 00:09:24.064 ' 00:09:24.064 06:04:29 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.064 --rc genhtml_branch_coverage=1 00:09:24.064 --rc genhtml_function_coverage=1 00:09:24.064 --rc genhtml_legend=1 00:09:24.064 --rc geninfo_all_blocks=1 00:09:24.064 --rc geninfo_unexecuted_blocks=1 00:09:24.064 00:09:24.064 ' 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.064 --rc genhtml_branch_coverage=1 00:09:24.064 --rc genhtml_function_coverage=1 00:09:24.064 --rc genhtml_legend=1 00:09:24.064 --rc geninfo_all_blocks=1 00:09:24.064 --rc geninfo_unexecuted_blocks=1 00:09:24.064 00:09:24.064 ' 00:09:24.064 06:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:24.064 06:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57957 00:09:24.064 06:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57957 00:09:24.064 06:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57957 ']' 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.064 06:04:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.064 [2024-11-27 06:04:29.136224] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:09:24.064 [2024-11-27 06:04:29.136608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:09:24.323 [2024-11-27 06:04:29.285827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.323 [2024-11-27 06:04:29.352101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.582 [2024-11-27 06:04:29.428339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.582 06:04:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.582 06:04:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.582 06:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:25.148 06:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57957 00:09:25.148 06:04:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57957 ']' 00:09:25.148 06:04:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57957 00:09:25.148 06:04:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:25.148 06:04:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.148 06:04:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57957 00:09:25.148 killing process with pid 57957 00:09:25.148 06:04:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.148 06:04:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.148 06:04:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57957' 00:09:25.148 06:04:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 57957 00:09:25.148 06:04:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 57957 00:09:25.459 ************************************ 00:09:25.459 END TEST alias_rpc 00:09:25.459 ************************************ 00:09:25.459 00:09:25.459 real 0m1.553s 00:09:25.459 user 0m1.643s 00:09:25.459 sys 0m0.458s 00:09:25.459 06:04:30 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.459 06:04:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.459 06:04:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:25.459 06:04:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:25.459 06:04:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.459 06:04:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.459 06:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:25.459 ************************************ 00:09:25.459 START TEST spdkcli_tcp 00:09:25.459 ************************************ 00:09:25.459 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:25.459 * Looking for test storage... 
00:09:25.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:25.718 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.718 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.718 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.718 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.718 06:04:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.718 06:04:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.718 06:04:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.718 06:04:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.719 06:04:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.719 --rc genhtml_branch_coverage=1 00:09:25.719 --rc genhtml_function_coverage=1 00:09:25.719 --rc genhtml_legend=1 00:09:25.719 --rc geninfo_all_blocks=1 00:09:25.719 --rc geninfo_unexecuted_blocks=1 00:09:25.719 00:09:25.719 ' 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.719 --rc genhtml_branch_coverage=1 00:09:25.719 --rc genhtml_function_coverage=1 00:09:25.719 --rc genhtml_legend=1 00:09:25.719 --rc geninfo_all_blocks=1 00:09:25.719 --rc geninfo_unexecuted_blocks=1 00:09:25.719 
00:09:25.719 ' 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.719 --rc genhtml_branch_coverage=1 00:09:25.719 --rc genhtml_function_coverage=1 00:09:25.719 --rc genhtml_legend=1 00:09:25.719 --rc geninfo_all_blocks=1 00:09:25.719 --rc geninfo_unexecuted_blocks=1 00:09:25.719 00:09:25.719 ' 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.719 --rc genhtml_branch_coverage=1 00:09:25.719 --rc genhtml_function_coverage=1 00:09:25.719 --rc genhtml_legend=1 00:09:25.719 --rc geninfo_all_blocks=1 00:09:25.719 --rc geninfo_unexecuted_blocks=1 00:09:25.719 00:09:25.719 ' 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58028 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:25.719 06:04:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58028 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58028 ']' 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.719 06:04:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.719 [2024-11-27 06:04:30.750543] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:09:25.719 [2024-11-27 06:04:30.750933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58028 ] 00:09:25.978 [2024-11-27 06:04:30.903790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.978 [2024-11-27 06:04:31.002916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.978 [2024-11-27 06:04:31.002935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.237 [2024-11-27 06:04:31.116743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.804 06:04:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.804 06:04:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:26.804 06:04:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58045 00:09:26.804 06:04:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:26.804 06:04:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:27.062 [ 00:09:27.062 "bdev_malloc_delete", 00:09:27.062 "bdev_malloc_create", 00:09:27.062 "bdev_null_resize", 00:09:27.062 "bdev_null_delete", 00:09:27.062 "bdev_null_create", 00:09:27.062 "bdev_nvme_cuse_unregister", 00:09:27.062 "bdev_nvme_cuse_register", 00:09:27.062 "bdev_opal_new_user", 00:09:27.062 "bdev_opal_set_lock_state", 00:09:27.062 "bdev_opal_delete", 00:09:27.062 "bdev_opal_get_info", 00:09:27.062 "bdev_opal_create", 00:09:27.062 "bdev_nvme_opal_revert", 00:09:27.062 "bdev_nvme_opal_init", 00:09:27.062 "bdev_nvme_send_cmd", 00:09:27.062 "bdev_nvme_set_keys", 00:09:27.062 "bdev_nvme_get_path_iostat", 00:09:27.062 "bdev_nvme_get_mdns_discovery_info", 00:09:27.062 "bdev_nvme_stop_mdns_discovery", 00:09:27.062 "bdev_nvme_start_mdns_discovery", 00:09:27.062 "bdev_nvme_set_multipath_policy", 00:09:27.062 "bdev_nvme_set_preferred_path", 00:09:27.062 "bdev_nvme_get_io_paths", 00:09:27.062 "bdev_nvme_remove_error_injection", 00:09:27.062 "bdev_nvme_add_error_injection", 00:09:27.062 "bdev_nvme_get_discovery_info", 00:09:27.062 "bdev_nvme_stop_discovery", 00:09:27.062 "bdev_nvme_start_discovery", 00:09:27.062 "bdev_nvme_get_controller_health_info", 00:09:27.062 "bdev_nvme_disable_controller", 00:09:27.062 "bdev_nvme_enable_controller", 00:09:27.062 "bdev_nvme_reset_controller", 00:09:27.062 "bdev_nvme_get_transport_statistics", 00:09:27.062 "bdev_nvme_apply_firmware", 00:09:27.062 "bdev_nvme_detach_controller", 00:09:27.062 "bdev_nvme_get_controllers", 00:09:27.062 "bdev_nvme_attach_controller", 00:09:27.062 "bdev_nvme_set_hotplug", 00:09:27.062 "bdev_nvme_set_options", 00:09:27.062 "bdev_passthru_delete", 00:09:27.062 "bdev_passthru_create", 00:09:27.062 "bdev_lvol_set_parent_bdev", 00:09:27.062 "bdev_lvol_set_parent", 00:09:27.062 "bdev_lvol_check_shallow_copy", 00:09:27.062 "bdev_lvol_start_shallow_copy", 00:09:27.062 "bdev_lvol_grow_lvstore", 00:09:27.062 "bdev_lvol_get_lvols", 00:09:27.062 "bdev_lvol_get_lvstores", 00:09:27.062 "bdev_lvol_delete", 00:09:27.062 "bdev_lvol_set_read_only", 00:09:27.062 "bdev_lvol_resize", 00:09:27.062 "bdev_lvol_decouple_parent", 00:09:27.062 "bdev_lvol_inflate", 00:09:27.062 "bdev_lvol_rename", 00:09:27.062 "bdev_lvol_clone_bdev", 00:09:27.062 "bdev_lvol_clone", 00:09:27.062 "bdev_lvol_snapshot", 
00:09:27.063 "bdev_lvol_create", 00:09:27.063 "bdev_lvol_delete_lvstore", 00:09:27.063 "bdev_lvol_rename_lvstore", 00:09:27.063 "bdev_lvol_create_lvstore", 00:09:27.063 "bdev_raid_set_options", 00:09:27.063 "bdev_raid_remove_base_bdev", 00:09:27.063 "bdev_raid_add_base_bdev", 00:09:27.063 "bdev_raid_delete", 00:09:27.063 "bdev_raid_create", 00:09:27.063 "bdev_raid_get_bdevs", 00:09:27.063 "bdev_error_inject_error", 00:09:27.063 "bdev_error_delete", 00:09:27.063 "bdev_error_create", 00:09:27.063 "bdev_split_delete", 00:09:27.063 "bdev_split_create", 00:09:27.063 "bdev_delay_delete", 00:09:27.063 "bdev_delay_create", 00:09:27.063 "bdev_delay_update_latency", 00:09:27.063 "bdev_zone_block_delete", 00:09:27.063 "bdev_zone_block_create", 00:09:27.063 "blobfs_create", 00:09:27.063 "blobfs_detect", 00:09:27.063 "blobfs_set_cache_size", 00:09:27.063 "bdev_aio_delete", 00:09:27.063 "bdev_aio_rescan", 00:09:27.063 "bdev_aio_create", 00:09:27.063 "bdev_ftl_set_property", 00:09:27.063 "bdev_ftl_get_properties", 00:09:27.063 "bdev_ftl_get_stats", 00:09:27.063 "bdev_ftl_unmap", 00:09:27.063 "bdev_ftl_unload", 00:09:27.063 "bdev_ftl_delete", 00:09:27.063 "bdev_ftl_load", 00:09:27.063 "bdev_ftl_create", 00:09:27.063 "bdev_virtio_attach_controller", 00:09:27.063 "bdev_virtio_scsi_get_devices", 00:09:27.063 "bdev_virtio_detach_controller", 00:09:27.063 "bdev_virtio_blk_set_hotplug", 00:09:27.063 "bdev_iscsi_delete", 00:09:27.063 "bdev_iscsi_create", 00:09:27.063 "bdev_iscsi_set_options", 00:09:27.063 "bdev_uring_delete", 00:09:27.063 "bdev_uring_rescan", 00:09:27.063 "bdev_uring_create", 00:09:27.063 "accel_error_inject_error", 00:09:27.063 "ioat_scan_accel_module", 00:09:27.063 "dsa_scan_accel_module", 00:09:27.063 "iaa_scan_accel_module", 00:09:27.063 "keyring_file_remove_key", 00:09:27.063 "keyring_file_add_key", 00:09:27.063 "keyring_linux_set_options", 00:09:27.063 "fsdev_aio_delete", 00:09:27.063 "fsdev_aio_create", 00:09:27.063 "iscsi_get_histogram", 00:09:27.063 "iscsi_enable_histogram", 00:09:27.063 "iscsi_set_options", 00:09:27.063 "iscsi_get_auth_groups", 00:09:27.063 "iscsi_auth_group_remove_secret", 00:09:27.063 "iscsi_auth_group_add_secret", 00:09:27.063 "iscsi_delete_auth_group", 00:09:27.063 "iscsi_create_auth_group", 00:09:27.063 "iscsi_set_discovery_auth", 00:09:27.063 "iscsi_get_options", 00:09:27.063 "iscsi_target_node_request_logout", 00:09:27.063 "iscsi_target_node_set_redirect", 00:09:27.063 "iscsi_target_node_set_auth", 00:09:27.063 "iscsi_target_node_add_lun", 00:09:27.063 "iscsi_get_stats", 00:09:27.063 "iscsi_get_connections", 00:09:27.063 "iscsi_portal_group_set_auth", 00:09:27.063 "iscsi_start_portal_group", 00:09:27.063 "iscsi_delete_portal_group", 00:09:27.063 "iscsi_create_portal_group", 00:09:27.063 "iscsi_get_portal_groups", 00:09:27.063 "iscsi_delete_target_node", 00:09:27.063 "iscsi_target_node_remove_pg_ig_maps", 00:09:27.063 "iscsi_target_node_add_pg_ig_maps", 00:09:27.063 "iscsi_create_target_node", 00:09:27.063 "iscsi_get_target_nodes", 00:09:27.063 "iscsi_delete_initiator_group", 00:09:27.063 "iscsi_initiator_group_remove_initiators", 00:09:27.063 "iscsi_initiator_group_add_initiators", 00:09:27.063 "iscsi_create_initiator_group", 00:09:27.063 "iscsi_get_initiator_groups", 00:09:27.063 "nvmf_set_crdt", 00:09:27.063 "nvmf_set_config", 00:09:27.063 "nvmf_set_max_subsystems", 00:09:27.063 "nvmf_stop_mdns_prr", 00:09:27.063 "nvmf_publish_mdns_prr", 00:09:27.063 "nvmf_subsystem_get_listeners", 00:09:27.063 "nvmf_subsystem_get_qpairs", 00:09:27.063 
"nvmf_subsystem_get_controllers", 00:09:27.063 "nvmf_get_stats", 00:09:27.063 "nvmf_get_transports", 00:09:27.063 "nvmf_create_transport", 00:09:27.063 "nvmf_get_targets", 00:09:27.063 "nvmf_delete_target", 00:09:27.063 "nvmf_create_target", 00:09:27.063 "nvmf_subsystem_allow_any_host", 00:09:27.063 "nvmf_subsystem_set_keys", 00:09:27.063 "nvmf_subsystem_remove_host", 00:09:27.063 "nvmf_subsystem_add_host", 00:09:27.063 "nvmf_ns_remove_host", 00:09:27.063 "nvmf_ns_add_host", 00:09:27.063 "nvmf_subsystem_remove_ns", 00:09:27.063 "nvmf_subsystem_set_ns_ana_group", 00:09:27.063 "nvmf_subsystem_add_ns", 00:09:27.063 "nvmf_subsystem_listener_set_ana_state", 00:09:27.063 "nvmf_discovery_get_referrals", 00:09:27.063 "nvmf_discovery_remove_referral", 00:09:27.063 "nvmf_discovery_add_referral", 00:09:27.063 "nvmf_subsystem_remove_listener", 00:09:27.063 "nvmf_subsystem_add_listener", 00:09:27.063 "nvmf_delete_subsystem", 00:09:27.063 "nvmf_create_subsystem", 00:09:27.063 "nvmf_get_subsystems", 00:09:27.063 "env_dpdk_get_mem_stats", 00:09:27.063 "nbd_get_disks", 00:09:27.063 "nbd_stop_disk", 00:09:27.063 "nbd_start_disk", 00:09:27.063 "ublk_recover_disk", 00:09:27.063 "ublk_get_disks", 00:09:27.063 "ublk_stop_disk", 00:09:27.063 "ublk_start_disk", 00:09:27.063 "ublk_destroy_target", 00:09:27.063 "ublk_create_target", 00:09:27.063 "virtio_blk_create_transport", 00:09:27.063 "virtio_blk_get_transports", 00:09:27.063 "vhost_controller_set_coalescing", 00:09:27.063 "vhost_get_controllers", 00:09:27.063 "vhost_delete_controller", 00:09:27.063 "vhost_create_blk_controller", 00:09:27.063 "vhost_scsi_controller_remove_target", 00:09:27.063 "vhost_scsi_controller_add_target", 00:09:27.063 "vhost_start_scsi_controller", 00:09:27.063 "vhost_create_scsi_controller", 00:09:27.063 "thread_set_cpumask", 00:09:27.063 "scheduler_set_options", 00:09:27.063 "framework_get_governor", 00:09:27.063 "framework_get_scheduler", 00:09:27.063 "framework_set_scheduler", 00:09:27.063 "framework_get_reactors", 00:09:27.063 "thread_get_io_channels", 00:09:27.063 "thread_get_pollers", 00:09:27.063 "thread_get_stats", 00:09:27.063 "framework_monitor_context_switch", 00:09:27.063 "spdk_kill_instance", 00:09:27.063 "log_enable_timestamps", 00:09:27.063 "log_get_flags", 00:09:27.063 "log_clear_flag", 00:09:27.063 "log_set_flag", 00:09:27.063 "log_get_level", 00:09:27.063 "log_set_level", 00:09:27.063 "log_get_print_level", 00:09:27.063 "log_set_print_level", 00:09:27.063 "framework_enable_cpumask_locks", 00:09:27.063 "framework_disable_cpumask_locks", 00:09:27.063 "framework_wait_init", 00:09:27.063 "framework_start_init", 00:09:27.063 "scsi_get_devices", 00:09:27.063 "bdev_get_histogram", 00:09:27.063 "bdev_enable_histogram", 00:09:27.063 "bdev_set_qos_limit", 00:09:27.063 "bdev_set_qd_sampling_period", 00:09:27.063 "bdev_get_bdevs", 00:09:27.063 "bdev_reset_iostat", 00:09:27.063 "bdev_get_iostat", 00:09:27.063 "bdev_examine", 00:09:27.063 "bdev_wait_for_examine", 00:09:27.063 "bdev_set_options", 00:09:27.063 "accel_get_stats", 00:09:27.063 "accel_set_options", 00:09:27.063 "accel_set_driver", 00:09:27.063 "accel_crypto_key_destroy", 00:09:27.063 "accel_crypto_keys_get", 00:09:27.063 "accel_crypto_key_create", 00:09:27.063 "accel_assign_opc", 00:09:27.063 "accel_get_module_info", 00:09:27.063 "accel_get_opc_assignments", 00:09:27.063 "vmd_rescan", 00:09:27.063 "vmd_remove_device", 00:09:27.063 "vmd_enable", 00:09:27.063 "sock_get_default_impl", 00:09:27.063 "sock_set_default_impl", 00:09:27.063 "sock_impl_set_options", 00:09:27.063 
"sock_impl_get_options", 00:09:27.063 "iobuf_get_stats", 00:09:27.063 "iobuf_set_options", 00:09:27.063 "keyring_get_keys", 00:09:27.063 "framework_get_pci_devices", 00:09:27.063 "framework_get_config", 00:09:27.063 "framework_get_subsystems", 00:09:27.063 "fsdev_set_opts", 00:09:27.063 "fsdev_get_opts", 00:09:27.063 "trace_get_info", 00:09:27.063 "trace_get_tpoint_group_mask", 00:09:27.063 "trace_disable_tpoint_group", 00:09:27.063 "trace_enable_tpoint_group", 00:09:27.063 "trace_clear_tpoint_mask", 00:09:27.063 "trace_set_tpoint_mask", 00:09:27.063 "notify_get_notifications", 00:09:27.063 "notify_get_types", 00:09:27.063 "spdk_get_version", 00:09:27.063 "rpc_get_methods" 00:09:27.063 ] 00:09:27.063 06:04:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:27.063 06:04:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.063 06:04:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.063 06:04:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:27.063 06:04:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58028 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58028 ']' 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58028 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58028 00:09:27.322 killing process with pid 58028 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58028' 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58028 00:09:27.322 06:04:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58028 00:09:27.888 ************************************ 00:09:27.888 END TEST spdkcli_tcp 00:09:27.888 ************************************ 00:09:27.888 00:09:27.888 real 0m2.314s 00:09:27.888 user 0m4.218s 00:09:27.888 sys 0m0.649s 00:09:27.888 06:04:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.888 06:04:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.888 06:04:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:27.888 06:04:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.888 06:04:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.888 06:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:27.888 ************************************ 00:09:27.888 START TEST dpdk_mem_utility 00:09:27.888 ************************************ 00:09:27.888 06:04:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:27.888 * Looking for test storage... 
00:09:27.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:27.888 06:04:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.888 06:04:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.888 06:04:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.147 06:04:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.147 06:04:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.147 06:04:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.148 06:04:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.148 --rc genhtml_branch_coverage=1 00:09:28.148 --rc genhtml_function_coverage=1 00:09:28.148 --rc genhtml_legend=1 00:09:28.148 --rc geninfo_all_blocks=1 00:09:28.148 --rc geninfo_unexecuted_blocks=1 00:09:28.148 00:09:28.148 ' 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.148 --rc 
genhtml_branch_coverage=1 00:09:28.148 --rc genhtml_function_coverage=1 00:09:28.148 --rc genhtml_legend=1 00:09:28.148 --rc geninfo_all_blocks=1 00:09:28.148 --rc geninfo_unexecuted_blocks=1 00:09:28.148 00:09:28.148 ' 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.148 --rc genhtml_branch_coverage=1 00:09:28.148 --rc genhtml_function_coverage=1 00:09:28.148 --rc genhtml_legend=1 00:09:28.148 --rc geninfo_all_blocks=1 00:09:28.148 --rc geninfo_unexecuted_blocks=1 00:09:28.148 00:09:28.148 ' 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.148 --rc genhtml_branch_coverage=1 00:09:28.148 --rc genhtml_function_coverage=1 00:09:28.148 --rc genhtml_legend=1 00:09:28.148 --rc geninfo_all_blocks=1 00:09:28.148 --rc geninfo_unexecuted_blocks=1 00:09:28.148 00:09:28.148 ' 00:09:28.148 06:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:28.148 06:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58131 00:09:28.148 06:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.148 06:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58131 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58131 ']' 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.148 06:04:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:28.148 [2024-11-27 06:04:33.096299] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:09:28.148 [2024-11-27 06:04:33.096793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:09:28.406 [2024-11-27 06:04:33.251357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.406 [2024-11-27 06:04:33.349072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.406 [2024-11-27 06:04:33.466104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.342 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.342 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:29.342 06:04:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:29.342 06:04:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:29.342 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.342 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:29.342 { 00:09:29.342 "filename": "/tmp/spdk_mem_dump.txt" 00:09:29.342 } 00:09:29.342 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.342 06:04:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:29.342 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:29.342 1 heaps totaling size 818.000000 MiB 00:09:29.342 size: 818.000000 MiB heap id: 0 00:09:29.342 end heaps---------- 00:09:29.342 9 mempools totaling size 603.782043 MiB 00:09:29.342 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:29.342 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:29.342 size: 100.555481 MiB name: bdev_io_58131 00:09:29.342 size: 50.003479 MiB name: msgpool_58131 00:09:29.342 size: 36.509338 MiB name: fsdev_io_58131 00:09:29.342 size: 21.763794 MiB name: PDU_Pool 00:09:29.342 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:29.342 size: 4.133484 MiB name: evtpool_58131 00:09:29.342 size: 0.026123 MiB name: Session_Pool 00:09:29.342 end mempools------- 00:09:29.342 6 memzones totaling size 4.142822 MiB 00:09:29.342 size: 1.000366 MiB name: RG_ring_0_58131 00:09:29.342 size: 1.000366 MiB name: RG_ring_1_58131 00:09:29.342 size: 1.000366 MiB name: RG_ring_4_58131 00:09:29.343 size: 1.000366 MiB name: RG_ring_5_58131 00:09:29.343 size: 0.125366 MiB name: RG_ring_2_58131 00:09:29.343 size: 0.015991 MiB name: RG_ring_3_58131 00:09:29.343 end memzones------- 00:09:29.343 06:04:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:29.343 heap id: 0 total size: 818.000000 MiB number of busy elements: 316 number of free elements: 15 00:09:29.343 list of free elements. 
size: 10.802673 MiB 00:09:29.343 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:29.343 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:29.343 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:29.343 element at address: 0x200000400000 with size: 0.993958 MiB 00:09:29.343 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:29.343 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:29.343 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:29.343 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:29.343 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:09:29.343 element at address: 0x20000a600000 with size: 0.488892 MiB 00:09:29.343 element at address: 0x200000c00000 with size: 0.486267 MiB 00:09:29.343 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:29.343 element at address: 0x200003e00000 with size: 0.480286 MiB 00:09:29.343 element at address: 0x200028200000 with size: 0.395752 MiB 00:09:29.343 element at address: 0x200000800000 with size: 0.351746 MiB 00:09:29.343 list of standard malloc elements. size: 199.268433 MiB 00:09:29.343 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:29.343 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:29.343 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:29.343 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:29.343 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:29.343 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:29.343 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:29.343 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:29.343 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:29.343 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:09:29.343 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000085e580 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087e840 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087e900 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f080 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f140 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f200 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f380 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f440 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f500 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:29.343 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:09:29.343 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:09:29.343 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:09:29.344 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:29.344 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:29.344 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:09:29.344 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:09:29.344 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:29.344 element at address: 0x200028265500 with size: 0.000183 MiB 00:09:29.344 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c480 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c540 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c600 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c780 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c840 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c900 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d080 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d140 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d200 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d380 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d440 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d500 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d680 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d740 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d800 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826d980 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826da40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826db00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826de00 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826df80 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e040 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e100 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e280 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e340 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e400 
with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e580 with size: 0.000183 MiB 00:09:29.344 element at address: 0x20002826e640 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826e700 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826e880 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826e940 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f000 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f180 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f240 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f300 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f480 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f540 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f600 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f780 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f840 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f900 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:29.345 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:29.345 list of memzone associated elements. 
size: 607.928894 MiB 00:09:29.345 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:29.345 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:29.345 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:29.345 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:29.345 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:29.345 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58131_0 00:09:29.345 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:29.345 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58131_0 00:09:29.345 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:29.345 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58131_0 00:09:29.345 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:29.345 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:29.345 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:29.345 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:29.345 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:29.345 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58131_0 00:09:29.345 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:29.345 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58131 00:09:29.345 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:29.345 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58131 00:09:29.345 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:29.345 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:29.345 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:29.345 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:29.345 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:29.345 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:29.345 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:29.345 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:29.345 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:29.345 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58131 00:09:29.345 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:29.345 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58131 00:09:29.345 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:29.345 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58131 00:09:29.345 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:29.345 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58131 00:09:29.345 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:29.345 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58131 00:09:29.345 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:29.345 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58131 00:09:29.345 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:29.345 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:29.345 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:29.345 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:29.345 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:29.345 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:29.345 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:29.345 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58131 00:09:29.345 element at address: 0x20000085e640 with size: 0.125488 MiB 00:09:29.345 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58131 00:09:29.345 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:29.345 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:29.345 element at address: 0x200028265680 with size: 0.023743 MiB 00:09:29.345 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:29.345 element at address: 0x20000085a380 with size: 0.016113 MiB 00:09:29.345 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58131 00:09:29.345 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:09:29.345 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:29.345 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:09:29.345 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58131 00:09:29.345 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:29.345 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58131 00:09:29.345 element at address: 0x20000085a180 with size: 0.000305 MiB 00:09:29.345 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58131 00:09:29.345 element at address: 0x20002826c280 with size: 0.000305 MiB 00:09:29.345 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:29.345 06:04:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:29.345 06:04:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58131 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58131 ']' 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58131 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58131 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58131' 00:09:29.345 killing process with pid 58131 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58131 00:09:29.345 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58131 00:09:29.911 00:09:29.911 real 0m2.162s 00:09:29.911 user 0m2.280s 00:09:29.911 sys 0m0.624s 00:09:29.911 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.911 06:04:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:29.911 ************************************ 00:09:29.911 END TEST dpdk_mem_utility 00:09:29.911 ************************************ 00:09:30.172 06:04:35 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:30.172 06:04:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.172 06:04:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.172 06:04:35 -- common/autotest_common.sh@10 -- # set +x 
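Both this suite and the event suite that starts below open with the same scripts/common.sh probe traced earlier in this log: lt 1.15 2 runs cmp_versions to decide whether the installed lcov is older than version 2 before LCOV_OPTS is exported. A standalone sketch of that comparison, reconstructed from the xtrace and using an illustrative helper name rather than the script's own, is:

  # illustrative stand-in for the lt/cmp_versions pair; assumes plain numeric version components
  ver_lt() {
      local IFS=.-:                                  # split on dots, dashes and colons, as the trace shows
      read -ra a <<< "$1"; read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      local v x y
      for (( v = 0; v < n; v++ )); do                # missing components compare as 0
          x=${a[v]:-0}; y=${b[v]:-0}
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1                                       # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "lcov older than 2: enable the --rc lcov_*_coverage options"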
00:09:30.172 ************************************ 00:09:30.172 START TEST event 00:09:30.172 ************************************ 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:30.172 * Looking for test storage... 00:09:30.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.172 06:04:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.172 06:04:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.172 06:04:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.172 06:04:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.172 06:04:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.172 06:04:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.172 06:04:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.172 06:04:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.172 06:04:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.172 06:04:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.172 06:04:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.172 06:04:35 event -- scripts/common.sh@344 -- # case "$op" in 00:09:30.172 06:04:35 event -- scripts/common.sh@345 -- # : 1 00:09:30.172 06:04:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.172 06:04:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.172 06:04:35 event -- scripts/common.sh@365 -- # decimal 1 00:09:30.172 06:04:35 event -- scripts/common.sh@353 -- # local d=1 00:09:30.172 06:04:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.172 06:04:35 event -- scripts/common.sh@355 -- # echo 1 00:09:30.172 06:04:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.172 06:04:35 event -- scripts/common.sh@366 -- # decimal 2 00:09:30.172 06:04:35 event -- scripts/common.sh@353 -- # local d=2 00:09:30.172 06:04:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.172 06:04:35 event -- scripts/common.sh@355 -- # echo 2 00:09:30.172 06:04:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.172 06:04:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.172 06:04:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.172 06:04:35 event -- scripts/common.sh@368 -- # return 0 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.172 --rc genhtml_branch_coverage=1 00:09:30.172 --rc genhtml_function_coverage=1 00:09:30.172 --rc genhtml_legend=1 00:09:30.172 --rc geninfo_all_blocks=1 00:09:30.172 --rc geninfo_unexecuted_blocks=1 00:09:30.172 00:09:30.172 ' 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.172 --rc genhtml_branch_coverage=1 00:09:30.172 --rc genhtml_function_coverage=1 00:09:30.172 --rc genhtml_legend=1 00:09:30.172 --rc 
geninfo_all_blocks=1 00:09:30.172 --rc geninfo_unexecuted_blocks=1 00:09:30.172 00:09:30.172 ' 00:09:30.172 06:04:35 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.172 --rc genhtml_branch_coverage=1 00:09:30.172 --rc genhtml_function_coverage=1 00:09:30.172 --rc genhtml_legend=1 00:09:30.172 --rc geninfo_all_blocks=1 00:09:30.172 --rc geninfo_unexecuted_blocks=1 00:09:30.173 00:09:30.173 ' 00:09:30.173 06:04:35 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.173 --rc genhtml_branch_coverage=1 00:09:30.173 --rc genhtml_function_coverage=1 00:09:30.173 --rc genhtml_legend=1 00:09:30.173 --rc geninfo_all_blocks=1 00:09:30.173 --rc geninfo_unexecuted_blocks=1 00:09:30.173 00:09:30.173 ' 00:09:30.173 06:04:35 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:30.173 06:04:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:30.173 06:04:35 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.173 06:04:35 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:30.173 06:04:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.173 06:04:35 event -- common/autotest_common.sh@10 -- # set +x 00:09:30.173 ************************************ 00:09:30.173 START TEST event_perf 00:09:30.173 ************************************ 00:09:30.173 06:04:35 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.455 Running I/O for 1 seconds...[2024-11-27 06:04:35.266465] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:30.455 [2024-11-27 06:04:35.266588] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58223 ] 00:09:30.455 [2024-11-27 06:04:35.426636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.455 [2024-11-27 06:04:35.521477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.455 [2024-11-27 06:04:35.521575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.455 [2024-11-27 06:04:35.521826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.455 [2024-11-27 06:04:35.521831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.829 Running I/O for 1 seconds... 00:09:31.829 lcore 0: 110782 00:09:31.829 lcore 1: 110776 00:09:31.829 lcore 2: 110778 00:09:31.829 lcore 3: 110779 00:09:31.829 done. 
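The four lcore counters above are the output of test/event/event_perf/event_perf run with core mask 0xF for one second, exactly as the run_test line shows; the timing summary follows below. Exercising a different topology only changes those two flags, for example (illustrative values, not part of this run):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5   # two reactors, five-second run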
00:09:31.829 00:09:31.829 real 0m1.340s 00:09:31.829 user 0m4.141s 00:09:31.829 sys 0m0.066s 00:09:31.829 06:04:36 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.829 06:04:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:31.829 ************************************ 00:09:31.829 END TEST event_perf 00:09:31.829 ************************************ 00:09:31.829 06:04:36 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:31.829 06:04:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.829 06:04:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.829 06:04:36 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.829 ************************************ 00:09:31.829 START TEST event_reactor 00:09:31.829 ************************************ 00:09:31.829 06:04:36 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:31.829 [2024-11-27 06:04:36.646249] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:31.829 [2024-11-27 06:04:36.646584] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:09:31.829 [2024-11-27 06:04:36.791926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.829 [2024-11-27 06:04:36.862339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.200 test_start 00:09:33.200 oneshot 00:09:33.200 tick 100 00:09:33.200 tick 100 00:09:33.200 tick 250 00:09:33.200 tick 100 00:09:33.200 tick 100 00:09:33.200 tick 100 00:09:33.200 tick 250 00:09:33.200 tick 500 00:09:33.200 tick 100 00:09:33.200 tick 100 00:09:33.200 tick 250 00:09:33.200 tick 100 00:09:33.200 tick 100 00:09:33.200 test_end 00:09:33.200 ************************************ 00:09:33.200 END TEST event_reactor 00:09:33.200 ************************************ 00:09:33.200 00:09:33.200 real 0m1.286s 00:09:33.200 user 0m1.131s 00:09:33.200 sys 0m0.046s 00:09:33.200 06:04:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.200 06:04:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:33.200 06:04:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:33.200 06:04:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.200 06:04:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.200 06:04:37 event -- common/autotest_common.sh@10 -- # set +x 00:09:33.200 ************************************ 00:09:33.200 START TEST event_reactor_perf 00:09:33.200 ************************************ 00:09:33.200 06:04:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:33.200 [2024-11-27 06:04:37.992167] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:09:33.200 [2024-11-27 06:04:37.992267] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:09:33.200 [2024-11-27 06:04:38.141308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.200 [2024-11-27 06:04:38.210730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.573 test_start 00:09:34.573 test_end 00:09:34.573 Performance: 371371 events per second 00:09:34.573 00:09:34.573 real 0m1.295s 00:09:34.573 user 0m1.139s 00:09:34.573 sys 0m0.049s 00:09:34.573 06:04:39 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.573 06:04:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:34.573 ************************************ 00:09:34.573 END TEST event_reactor_perf 00:09:34.573 ************************************ 00:09:34.573 06:04:39 event -- event/event.sh@49 -- # uname -s 00:09:34.573 06:04:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:34.573 06:04:39 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:34.573 06:04:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.573 06:04:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.573 06:04:39 event -- common/autotest_common.sh@10 -- # set +x 00:09:34.573 ************************************ 00:09:34.573 START TEST event_scheduler 00:09:34.573 ************************************ 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:34.573 * Looking for test storage... 
00:09:34.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.573 06:04:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.573 --rc genhtml_branch_coverage=1 00:09:34.573 --rc genhtml_function_coverage=1 00:09:34.573 --rc genhtml_legend=1 00:09:34.573 --rc geninfo_all_blocks=1 00:09:34.573 --rc geninfo_unexecuted_blocks=1 00:09:34.573 00:09:34.573 ' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.573 --rc genhtml_branch_coverage=1 00:09:34.573 --rc genhtml_function_coverage=1 00:09:34.573 --rc genhtml_legend=1 00:09:34.573 --rc geninfo_all_blocks=1 00:09:34.573 --rc geninfo_unexecuted_blocks=1 00:09:34.573 00:09:34.573 ' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.573 --rc genhtml_branch_coverage=1 00:09:34.573 --rc genhtml_function_coverage=1 00:09:34.573 --rc genhtml_legend=1 00:09:34.573 --rc geninfo_all_blocks=1 00:09:34.573 --rc geninfo_unexecuted_blocks=1 00:09:34.573 00:09:34.573 ' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.573 --rc genhtml_branch_coverage=1 00:09:34.573 --rc genhtml_function_coverage=1 00:09:34.573 --rc genhtml_legend=1 00:09:34.573 --rc geninfo_all_blocks=1 00:09:34.573 --rc geninfo_unexecuted_blocks=1 00:09:34.573 00:09:34.573 ' 00:09:34.573 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:34.573 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58361 00:09:34.573 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:34.573 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.573 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58361 00:09:34.573 06:04:39 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58361 ']' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.573 06:04:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:34.573 [2024-11-27 06:04:39.585519] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:34.573 [2024-11-27 06:04:39.586250] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58361 ] 00:09:34.865 [2024-11-27 06:04:39.738707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.865 [2024-11-27 06:04:39.820649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.865 [2024-11-27 06:04:39.820776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.865 [2024-11-27 06:04:39.821695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.865 [2024-11-27 06:04:39.821728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:34.865 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:34.865 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:34.865 POWER: Cannot set governor of lcore 0 to userspace 00:09:34.865 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:34.865 POWER: Cannot set governor of lcore 0 to performance 00:09:34.865 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:34.865 POWER: Cannot set governor of lcore 0 to userspace 00:09:34.865 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:34.865 POWER: Cannot set governor of lcore 0 to userspace 00:09:34.865 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:34.865 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:34.865 POWER: Unable to set Power Management Environment for lcore 0 00:09:34.865 [2024-11-27 06:04:39.900026] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:34.865 [2024-11-27 06:04:39.900218] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:34.865 [2024-11-27 06:04:39.900378] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:34.865 [2024-11-27 06:04:39.900511] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:34.865 [2024-11-27 06:04:39.900682] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:34.865 [2024-11-27 06:04:39.900800] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.865 06:04:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.865 06:04:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 [2024-11-27 06:04:39.967600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.124 [2024-11-27 06:04:40.013254] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:35.124 06:04:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:35.124 06:04:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.124 06:04:40 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 ************************************ 00:09:35.124 START TEST scheduler_create_thread 00:09:35.124 ************************************ 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 2 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 3 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 4 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 5 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 6 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 7 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 8 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 9 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 10 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:35.124 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.125 06:04:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.125 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.125 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:35.125 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:35.125 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.125 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.689 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.689 06:04:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:35.689 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.689 06:04:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.065 06:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.065 06:04:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:37.065 06:04:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:37.065 06:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.065 06:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 06:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 00:09:38.440 real 0m3.098s 00:09:38.440 user 0m0.017s 00:09:38.440 sys 0m0.009s 00:09:38.440 ************************************ 00:09:38.440 END TEST scheduler_create_thread 00:09:38.440 ************************************ 00:09:38.440 06:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.440 06:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 06:04:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:38.440 06:04:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58361 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58361 ']' 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58361 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58361 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:38.440 killing process with pid 58361 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58361' 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58361 00:09:38.440 06:04:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58361 00:09:38.440 [2024-11-27 06:04:43.503890] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:39.010 00:09:39.010 real 0m4.506s 00:09:39.010 user 0m7.324s 00:09:39.010 sys 0m0.383s 00:09:39.010 ************************************ 00:09:39.010 END TEST event_scheduler 00:09:39.010 ************************************ 00:09:39.010 06:04:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.010 06:04:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.010 06:04:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:39.010 06:04:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:39.010 06:04:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.010 06:04:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.010 06:04:43 event -- common/autotest_common.sh@10 -- # set +x 00:09:39.010 ************************************ 00:09:39.010 START TEST app_repeat 00:09:39.010 ************************************ 00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58453 00:09:39.010 Process app_repeat pid: 58453 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58453' 00:09:39.010 spdk_app_start Round 0 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:39.010 06:04:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
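For reference, the scheduler exercise recorded above (framework_set_scheduler dynamic, framework_start_init, then the scheduler_plugin thread RPCs) can be driven by hand against a running SPDK target. A minimal sketch, assuming a target listening on /var/tmp/spdk.sock and the scheduler_plugin module from test/event/scheduler on PYTHONPATH; the thread name, mask and repo path are illustrative, not a verbatim replay of the test:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Select the dynamic scheduler before subsystem init, then finish init.
  $rpc -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  $rpc -s /var/tmp/spdk.sock framework_start_init
  # The thread RPC below comes from the test app's plugin, not core SPDK.
  $rpc -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Inspect which reactors the scheduler placed the threads on.
  $rpc -s /var/tmp/spdk.sock framework_get_reactors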
00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.010 06:04:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:39.011 [2024-11-27 06:04:43.918909] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:09:39.011 [2024-11-27 06:04:43.919043] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58453 ] 00:09:39.011 [2024-11-27 06:04:44.073086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.270 [2024-11-27 06:04:44.145224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.270 [2024-11-27 06:04:44.145237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.270 [2024-11-27 06:04:44.202669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.270 06:04:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.270 06:04:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:39.270 06:04:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:39.532 Malloc0 00:09:39.800 06:04:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:40.058 Malloc1 00:09:40.058 06:04:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.058 06:04:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:40.317 /dev/nbd0 00:09:40.317 06:04:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:40.317 06:04:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.317 1+0 records in 00:09:40.317 1+0 records out 00:09:40.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262037 s, 15.6 MB/s 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.317 06:04:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:40.317 06:04:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.317 06:04:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.317 06:04:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:40.574 /dev/nbd1 00:09:40.574 06:04:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:40.575 06:04:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.575 06:04:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.833 1+0 records in 00:09:40.833 1+0 records out 00:09:40.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282576 s, 14.5 MB/s 00:09:40.833 06:04:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.833 06:04:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:40.833 06:04:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.833 06:04:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.833 06:04:45 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:09:40.833 06:04:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.833 06:04:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.833 06:04:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:40.833 06:04:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.833 06:04:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:41.091 { 00:09:41.091 "nbd_device": "/dev/nbd0", 00:09:41.091 "bdev_name": "Malloc0" 00:09:41.091 }, 00:09:41.091 { 00:09:41.091 "nbd_device": "/dev/nbd1", 00:09:41.091 "bdev_name": "Malloc1" 00:09:41.091 } 00:09:41.091 ]' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:41.091 { 00:09:41.091 "nbd_device": "/dev/nbd0", 00:09:41.091 "bdev_name": "Malloc0" 00:09:41.091 }, 00:09:41.091 { 00:09:41.091 "nbd_device": "/dev/nbd1", 00:09:41.091 "bdev_name": "Malloc1" 00:09:41.091 } 00:09:41.091 ]' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:41.091 /dev/nbd1' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:41.091 /dev/nbd1' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:41.091 06:04:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:41.091 256+0 records in 00:09:41.091 256+0 records out 00:09:41.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640906 s, 164 MB/s 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:41.091 256+0 records in 00:09:41.091 256+0 records out 00:09:41.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227494 s, 46.1 MB/s 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:41.091 256+0 records in 00:09:41.091 
256+0 records out 00:09:41.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240312 s, 43.6 MB/s 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.091 06:04:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:41.092 06:04:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:41.092 06:04:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.092 06:04:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.350 06:04:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
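The Malloc0/Malloc1 round trip logged above follows a simple write-then-verify cycle: fill a scratch file with random data, dd it onto each exported nbd device with O_DIRECT, then cmp the device contents back against the file. A condensed sketch of that cycle for one device; the scratch path and device name are illustrative and assume the bdev has already been exported with nbd_start_disk:

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the nbd device
  cmp -b -n 1M "$tmp" /dev/nbd0                             # byte-for-byte read-back check
  rm -f "$tmp"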
00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.608 06:04:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.174 06:04:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.174 06:04:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.174 06:04:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.174 06:04:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.174 06:04:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:42.432 06:04:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:42.692 [2024-11-27 06:04:47.576782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.692 [2024-11-27 06:04:47.636293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.692 [2024-11-27 06:04:47.636303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.692 [2024-11-27 06:04:47.690552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.692 [2024-11-27 06:04:47.690653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:42.692 [2024-11-27 06:04:47.690667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:45.978 06:04:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:45.978 spdk_app_start Round 1 00:09:45.978 06:04:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:45.978 06:04:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
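After stopping the devices, the harness confirms that the target no longer exports anything by counting /dev/nbd entries in the nbd_get_disks output. A minimal sketch of that check, using the same socket as above (the comparison is simplified here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  disks=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  count=$(echo "$disks" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]   # expect zero exported nbd devices once both disks are stopped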
00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.978 06:04:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:45.978 06:04:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.978 Malloc0 00:09:45.978 06:04:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:46.237 Malloc1 00:09:46.237 06:04:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.237 06:04:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:46.805 /dev/nbd0 00:09:46.805 06:04:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:46.805 06:04:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:46.805 1+0 records in 00:09:46.805 1+0 records out 
00:09:46.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023603 s, 17.4 MB/s 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:46.805 06:04:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:46.805 06:04:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.805 06:04:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.805 06:04:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:47.063 /dev/nbd1 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:47.064 1+0 records in 00:09:47.064 1+0 records out 00:09:47.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378956 s, 10.8 MB/s 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:47.064 06:04:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.064 06:04:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:47.322 { 00:09:47.322 "nbd_device": "/dev/nbd0", 00:09:47.322 "bdev_name": "Malloc0" 00:09:47.322 }, 00:09:47.322 { 00:09:47.322 "nbd_device": "/dev/nbd1", 00:09:47.322 "bdev_name": "Malloc1" 00:09:47.322 } 
00:09:47.322 ]' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:47.322 { 00:09:47.322 "nbd_device": "/dev/nbd0", 00:09:47.322 "bdev_name": "Malloc0" 00:09:47.322 }, 00:09:47.322 { 00:09:47.322 "nbd_device": "/dev/nbd1", 00:09:47.322 "bdev_name": "Malloc1" 00:09:47.322 } 00:09:47.322 ]' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:47.322 /dev/nbd1' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:47.322 /dev/nbd1' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:47.322 256+0 records in 00:09:47.322 256+0 records out 00:09:47.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00955689 s, 110 MB/s 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:47.322 256+0 records in 00:09:47.322 256+0 records out 00:09:47.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223353 s, 46.9 MB/s 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:47.322 06:04:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:47.580 256+0 records in 00:09:47.580 256+0 records out 00:09:47.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238896 s, 43.9 MB/s 00:09:47.580 06:04:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:47.580 06:04:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.580 06:04:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.580 06:04:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:47.581 06:04:52 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.581 06:04:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.839 06:04:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.097 06:04:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.355 06:04:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.355 06:04:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.355 06:04:53 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:48.355 06:04:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.355 06:04:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.355 06:04:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.613 06:04:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:48.613 06:04:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.613 06:04:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.613 06:04:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:48.613 06:04:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:48.613 06:04:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:48.613 06:04:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:48.885 06:04:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:48.885 [2024-11-27 06:04:53.966230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.173 [2024-11-27 06:04:54.030819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.173 [2024-11-27 06:04:54.030828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.173 [2024-11-27 06:04:54.086965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.173 [2024-11-27 06:04:54.087080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:49.173 [2024-11-27 06:04:54.087096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:52.462 spdk_app_start Round 2 00:09:52.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:52.462 06:04:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:52.462 06:04:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:52.462 06:04:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:09:52.462 06:04:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:09:52.462 06:04:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:52.462 06:04:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.462 06:04:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
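Between rounds the harness asks the running app_repeat instance to shut itself down over its RPC socket and then waits briefly before relaunching; the two lines below restate that teardown step as it appears in the trace above:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3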
00:09:52.462 06:04:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.462 06:04:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:52.462 06:04:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.462 06:04:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:52.462 06:04:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:52.462 Malloc0 00:09:52.462 06:04:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:52.720 Malloc1 00:09:52.720 06:04:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.720 06:04:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:52.978 /dev/nbd0 00:09:52.978 06:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.978 06:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.978 1+0 records in 00:09:52.978 1+0 records out 
00:09:52.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306895 s, 13.3 MB/s 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:52.978 06:04:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:52.978 06:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.978 06:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.978 06:04:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:53.546 /dev/nbd1 00:09:53.546 06:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:53.546 06:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:53.546 1+0 records in 00:09:53.546 1+0 records out 00:09:53.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469836 s, 8.7 MB/s 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:53.546 06:04:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:53.547 06:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.547 06:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.547 06:04:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.547 06:04:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.547 06:04:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:53.805 { 00:09:53.805 "nbd_device": "/dev/nbd0", 00:09:53.805 "bdev_name": "Malloc0" 00:09:53.805 }, 00:09:53.805 { 00:09:53.805 "nbd_device": "/dev/nbd1", 00:09:53.805 "bdev_name": "Malloc1" 00:09:53.805 } 
00:09:53.805 ]' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:53.805 { 00:09:53.805 "nbd_device": "/dev/nbd0", 00:09:53.805 "bdev_name": "Malloc0" 00:09:53.805 }, 00:09:53.805 { 00:09:53.805 "nbd_device": "/dev/nbd1", 00:09:53.805 "bdev_name": "Malloc1" 00:09:53.805 } 00:09:53.805 ]' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:53.805 /dev/nbd1' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:53.805 /dev/nbd1' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:53.805 256+0 records in 00:09:53.805 256+0 records out 00:09:53.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667922 s, 157 MB/s 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:53.805 256+0 records in 00:09:53.805 256+0 records out 00:09:53.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021838 s, 48.0 MB/s 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.805 06:04:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:54.064 256+0 records in 00:09:54.064 256+0 records out 00:09:54.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289743 s, 36.2 MB/s 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.064 06:04:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.321 06:04:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.579 06:04:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:54.837 06:04:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:54.837 06:04:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:55.095 06:05:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:55.354 [2024-11-27 06:05:00.345355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.354 [2024-11-27 06:05:00.427469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.354 [2024-11-27 06:05:00.427480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.613 [2024-11-27 06:05:00.481677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.613 [2024-11-27 06:05:00.481773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:55.613 [2024-11-27 06:05:00.481787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:58.142 06:05:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:09:58.142 06:05:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:09:58.142 06:05:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:58.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:58.142 06:05:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.142 06:05:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
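The dd/cmp sequence traced here is the NBD data-integrity check: nbd_dd_data_verify fills a 1 MiB reference file with random data, writes it through each exported /dev/nbdX with O_DIRECT, then byte-compares every device against the file before the devices are stopped over RPC. A minimal standalone sketch of that pattern (the temp path and device list are illustrative, not the exact test values):

  #!/usr/bin/env bash
  set -euo pipefail
  tmp_file=/tmp/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # 256 blocks of 4 KiB = 1 MiB of random reference data
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

  # write phase: push the reference data onto each exported NBD device
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: cmp exits non-zero on the first mismatching byte
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
  done

  rm "$tmp_file"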
00:09:58.142 06:05:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.142 06:05:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:58.728 06:05:03 event.app_repeat -- event/event.sh@39 -- # killprocess 58453 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58453 ']' 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58453 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58453 00:09:58.728 killing process with pid 58453 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58453' 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58453 00:09:58.728 06:05:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58453 00:09:58.728 spdk_app_start is called in Round 0. 00:09:58.728 Shutdown signal received, stop current app iteration 00:09:58.728 Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 reinitialization... 00:09:58.728 spdk_app_start is called in Round 1. 00:09:58.728 Shutdown signal received, stop current app iteration 00:09:58.728 Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 reinitialization... 00:09:58.728 spdk_app_start is called in Round 2. 00:09:58.728 Shutdown signal received, stop current app iteration 00:09:58.729 Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 reinitialization... 00:09:58.729 spdk_app_start is called in Round 3. 00:09:58.729 Shutdown signal received, stop current app iteration 00:09:58.729 06:05:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:58.729 06:05:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:58.729 00:09:58.729 real 0m19.875s 00:09:58.729 user 0m45.589s 00:09:58.729 sys 0m3.105s 00:09:58.729 06:05:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.729 06:05:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:58.729 ************************************ 00:09:58.729 END TEST app_repeat 00:09:58.729 ************************************ 00:09:58.729 06:05:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:58.729 06:05:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:58.729 06:05:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.729 06:05:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.729 06:05:03 event -- common/autotest_common.sh@10 -- # set +x 00:09:58.729 ************************************ 00:09:58.729 START TEST cpu_locks 00:09:58.729 ************************************ 00:09:58.729 06:05:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:59.014 * Looking for test storage... 
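The kill -0 / ps / kill / wait sequence traced for pid 58453 is the killprocess teardown used after every case in this file. A cut-down sketch of that helper (the real definition lives in test/common/autotest_common.sh and also special-cases processes running under sudo, omitted here):

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # bail out if the pid is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0" for an SPDK target
    echo "killing process with pid $pid"
    kill "$pid"                                      # default SIGTERM
    wait "$pid"                                      # reap it; works because the target was started by this shell
  }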
00:09:59.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:59.014 06:05:03 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.014 06:05:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.014 06:05:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.014 06:05:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.014 06:05:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.015 06:05:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.015 06:05:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.015 --rc genhtml_branch_coverage=1 00:09:59.015 --rc genhtml_function_coverage=1 00:09:59.015 --rc genhtml_legend=1 00:09:59.015 --rc geninfo_all_blocks=1 00:09:59.015 --rc geninfo_unexecuted_blocks=1 00:09:59.015 00:09:59.015 ' 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.015 --rc genhtml_branch_coverage=1 00:09:59.015 --rc genhtml_function_coverage=1 
00:09:59.015 --rc genhtml_legend=1 00:09:59.015 --rc geninfo_all_blocks=1 00:09:59.015 --rc geninfo_unexecuted_blocks=1 00:09:59.015 00:09:59.015 ' 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.015 --rc genhtml_branch_coverage=1 00:09:59.015 --rc genhtml_function_coverage=1 00:09:59.015 --rc genhtml_legend=1 00:09:59.015 --rc geninfo_all_blocks=1 00:09:59.015 --rc geninfo_unexecuted_blocks=1 00:09:59.015 00:09:59.015 ' 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.015 --rc genhtml_branch_coverage=1 00:09:59.015 --rc genhtml_function_coverage=1 00:09:59.015 --rc genhtml_legend=1 00:09:59.015 --rc geninfo_all_blocks=1 00:09:59.015 --rc geninfo_unexecuted_blocks=1 00:09:59.015 00:09:59.015 ' 00:09:59.015 06:05:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:59.015 06:05:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:59.015 06:05:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:59.015 06:05:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.015 06:05:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.015 ************************************ 00:09:59.015 START TEST default_locks 00:09:59.015 ************************************ 00:09:59.015 06:05:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:59.015 06:05:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58903 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58903 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58903 ']' 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.015 06:05:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.015 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.015 [2024-11-27 06:05:04.070418] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
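The IFS/read/decimal trace just above is scripts/common.sh deciding whether the installed lcov is older than 2 before exporting the matching LCOV_OPTS. A simplified component-wise version compare in the same spirit (the real cmp_versions also splits on '-' and ':' and supports the other operators, omitted here):

  cmp_versions() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$3"
    local op=$2 v a b
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      a=${ver1[v]:-0} b=${ver2[v]:-0}                # missing components count as 0
      if (( a > b )); then [[ $op == '>' ]]; return; fi
      if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                                # every component matched
  }

  lt() { cmp_versions "$1" '<' "$2"; }

  # mirrors the trace: lcov 1.15 is older than 2, so the 1.x option set is exported
  lt 1.15 2 && echo "lcov < 2"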
00:09:59.015 [2024-11-27 06:05:04.070529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58903 ] 00:09:59.272 [2024-11-27 06:05:04.221792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.272 [2024-11-27 06:05:04.297914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.530 [2024-11-27 06:05:04.381039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.530 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.530 06:05:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:59.530 06:05:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58903 00:09:59.530 06:05:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58903 00:09:59.530 06:05:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58903 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58903 ']' 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58903 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58903 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.096 killing process with pid 58903 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58903' 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58903 00:10:00.096 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58903 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58903 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58903 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58903 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58903 ']' 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.662 
06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.662 ERROR: process (pid: 58903) is no longer running 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.662 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58903) - No such process 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.662 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.663 06:05:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:00.663 06:05:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:00.663 06:05:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:00.663 06:05:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:00.663 00:10:00.663 real 0m1.527s 00:10:00.663 user 0m1.509s 00:10:00.663 sys 0m0.596s 00:10:00.663 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.663 06:05:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.663 ************************************ 00:10:00.663 END TEST default_locks 00:10:00.663 ************************************ 00:10:00.663 06:05:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:00.663 06:05:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.663 06:05:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.663 06:05:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.663 ************************************ 00:10:00.663 START TEST default_locks_via_rpc 00:10:00.663 ************************************ 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58947 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58947 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58947 ']' 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.663 06:05:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.663 [2024-11-27 06:05:05.648082] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:00.663 [2024-11-27 06:05:05.648213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58947 ] 00:10:00.920 [2024-11-27 06:05:05.801815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.920 [2024-11-27 06:05:05.876115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.920 [2024-11-27 06:05:05.955931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58947 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58947 00:10:01.178 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58947 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58947 ']' 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58947 00:10:01.743 06:05:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58947 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.743 killing process with pid 58947 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58947' 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58947 00:10:01.743 06:05:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58947 00:10:02.309 00:10:02.309 real 0m1.546s 00:10:02.309 user 0m1.507s 00:10:02.309 sys 0m0.607s 00:10:02.309 06:05:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.309 06:05:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.309 ************************************ 00:10:02.309 END TEST default_locks_via_rpc 00:10:02.309 ************************************ 00:10:02.309 06:05:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:02.309 06:05:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.309 06:05:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.309 06:05:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.309 ************************************ 00:10:02.309 START TEST non_locking_app_on_locked_coremask 00:10:02.309 ************************************ 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58991 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58991 /var/tmp/spdk.sock 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58991 ']' 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
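Both default_locks and default_locks_via_rpc assert lock ownership the same way: a target started with core locks enabled must show file locks containing "spdk_cpu_lock" in lslocks output (one /var/tmp/spdk_cpu_lock_NNN file per claimed core). A sketch of that locks_exist check as exercised in the trace ($spdk_tgt_pid stands in for the pid under test, 58903 and 58947 here):

  locks_exist() {
    local pid=$1
    # succeeds only if the process holds at least one spdk_cpu_lock_* file lock
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid" || { echo "core lock missing for $spdk_tgt_pid" >&2; exit 1; }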
00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.309 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.309 [2024-11-27 06:05:07.236381] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:02.309 [2024-11-27 06:05:07.236513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:10:02.309 [2024-11-27 06:05:07.390846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.568 [2024-11-27 06:05:07.473855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.568 [2024-11-27 06:05:07.556683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58999 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58999 /var/tmp/spdk2.sock 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58999 ']' 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.827 06:05:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.827 [2024-11-27 06:05:07.837396] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:02.827 [2024-11-27 06:05:07.837759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:10:03.085 [2024-11-27 06:05:08.000116] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
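The "CPU core locks deactivated." notice above is the point of non_locking_app_on_locked_coremask: two targets share core 0 because the second one opts out of core locking and answers on its own RPC socket. The launch commands, as traced (the waitforlisten step between them is elided):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
  first_pid=$!

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, but takes no lock
  second_pid=$!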
00:10:03.085 [2024-11-27 06:05:08.000193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.085 [2024-11-27 06:05:08.129166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.380 [2024-11-27 06:05:08.281395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:03.986 06:05:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.986 06:05:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:03.986 06:05:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58991 00:10:03.986 06:05:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58991 00:10:03.986 06:05:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58991 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58991 ']' 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58991 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58991 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.929 killing process with pid 58991 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58991' 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58991 00:10:04.929 06:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58991 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58999 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58999 ']' 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58999 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58999 00:10:05.863 killing process with pid 58999 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.863 06:05:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58999' 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58999 00:10:05.863 06:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58999 00:10:06.121 00:10:06.121 real 0m3.906s 00:10:06.121 user 0m4.271s 00:10:06.121 sys 0m1.204s 00:10:06.121 06:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.121 06:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.122 ************************************ 00:10:06.122 END TEST non_locking_app_on_locked_coremask 00:10:06.122 ************************************ 00:10:06.122 06:05:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:06.122 06:05:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.122 06:05:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.122 06:05:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.122 ************************************ 00:10:06.122 START TEST locking_app_on_unlocked_coremask 00:10:06.122 ************************************ 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:06.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59072 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59072 /var/tmp/spdk.sock 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59072 ']' 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.122 06:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.122 [2024-11-27 06:05:11.202319] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:06.122 [2024-11-27 06:05:11.202434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:10:06.380 [2024-11-27 06:05:11.389611] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
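The default_locks_via_rpc steps earlier in this run toggled the same behaviour at runtime rather than at startup: the target releases its per-core locks over RPC and re-acquires them on demand. A sketch using rpc.py directly (rpc_cmd in the trace is the test wrapper around it; $spdk_tgt_pid is assumed to hold the target's pid):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" framework_disable_cpumask_locks      # drop the /var/tmp/spdk_cpu_lock_* locks
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected lock still held" >&2

  "$rpc" framework_enable_cpumask_locks       # take the locks again for every core in the mask
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "expected lock missing" >&2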
00:10:06.380 [2024-11-27 06:05:11.389705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.380 [2024-11-27 06:05:11.460645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.638 [2024-11-27 06:05:11.541637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59088 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59088 /var/tmp/spdk2.sock 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59088 ']' 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.569 06:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.569 [2024-11-27 06:05:12.417173] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:07.569 [2024-11-27 06:05:12.417586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ] 00:10:07.569 [2024-11-27 06:05:12.582743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.827 [2024-11-27 06:05:12.719452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.827 [2024-11-27 06:05:12.877155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.759 06:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.759 06:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:08.759 06:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59088 00:10:08.759 06:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59088 00:10:08.759 06:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59072 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59072 ']' 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59072 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59072 00:10:09.692 killing process with pid 59072 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59072' 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59072 00:10:09.692 06:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59072 00:10:10.260 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59088 00:10:10.260 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59088 ']' 00:10:10.260 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59088 00:10:10.260 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:10.260 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.260 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59088 00:10:10.518 killing process with pid 59088 00:10:10.518 06:05:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.518 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.518 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59088' 00:10:10.518 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59088 00:10:10.518 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59088 00:10:10.860 ************************************ 00:10:10.860 END TEST locking_app_on_unlocked_coremask 00:10:10.860 ************************************ 00:10:10.860 00:10:10.860 real 0m4.639s 00:10:10.860 user 0m5.382s 00:10:10.860 sys 0m1.299s 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 06:05:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:10.860 06:05:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.860 06:05:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.860 06:05:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 ************************************ 00:10:10.860 START TEST locking_app_on_locked_coremask 00:10:10.860 ************************************ 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59159 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59159 /var/tmp/spdk.sock 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59159 ']' 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.860 06:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 [2024-11-27 06:05:15.872031] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
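Every case here follows the same start-up handshake: launch spdk_tgt in the background, then block until its UNIX-domain RPC socket answers ("Waiting for process to start up and listen on UNIX domain socket ..."). The trace only shows waitforlisten's entry (rpc_addr, max_retries=100), not its body; one plausible minimal stand-in polls an RPC that every target serves:

  start_and_wait() {
    local rpc_sock=${1:-/var/tmp/spdk.sock}
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r "$rpc_sock" &
    spdk_tgt_pid=$!
    local i
    for (( i = 0; i < 100; i++ )); do       # max_retries=100, as in the trace
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        return 0                            # socket is up and serving RPCs
      fi
      sleep 0.5
    done
    return 1                                # target never came up
  }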
00:10:10.860 [2024-11-27 06:05:15.872412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:10:11.121 [2024-11-27 06:05:16.096774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.121 [2024-11-27 06:05:16.178407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.379 [2024-11-27 06:05:16.280211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59176 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59176 /var/tmp/spdk2.sock 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59176 /var/tmp/spdk2.sock 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:11.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59176 /var/tmp/spdk2.sock 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59176 ']' 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.944 06:05:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.944 [2024-11-27 06:05:17.011239] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:11.944 [2024-11-27 06:05:17.012187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59176 ] 00:10:12.201 [2024-11-27 06:05:17.178957] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59159 has claimed it. 00:10:12.201 [2024-11-27 06:05:17.179033] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:12.769 ERROR: process (pid: 59176) is no longer running 00:10:12.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59176) - No such process 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59159 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59159 00:10:12.769 06:05:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59159 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59159 ']' 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59159 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59159 00:10:13.334 killing process with pid 59159 00:10:13.334 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.335 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.335 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59159' 00:10:13.335 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59159 00:10:13.335 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59159 00:10:13.901 00:10:13.901 real 0m2.906s 00:10:13.901 user 0m3.480s 00:10:13.901 sys 0m0.694s 00:10:13.901 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.901 ************************************ 00:10:13.901 END 
TEST locking_app_on_locked_coremask 00:10:13.901 ************************************ 00:10:13.901 06:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.901 06:05:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:13.901 06:05:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.901 06:05:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.901 06:05:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:13.901 ************************************ 00:10:13.901 START TEST locking_overlapped_coremask 00:10:13.901 ************************************ 00:10:13.901 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:13.901 06:05:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59222 00:10:13.901 06:05:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59222 /var/tmp/spdk.sock 00:10:13.901 06:05:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:13.902 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59222 ']' 00:10:13.902 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.902 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.902 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.902 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.902 06:05:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.902 [2024-11-27 06:05:18.822431] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
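locking_app_on_locked_coremask, just finished above, is the inverse assertion: with the first target holding core 0, a second target on the same mask without --disable-cpumask-locks must refuse to start ("Cannot create lock on core 0, probably process 59159 has claimed it"). The test expresses this with the NOT wrapper around waitforlisten; a simplified stand-in for the same negative check:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                          # first instance claims core 0
  first_pid=$!
  # ... wait for /var/tmp/spdk.sock to come up ...

  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &   # same mask, locks enabled: must abort
  second_pid=$!

  sleep 1                                       # give it time to hit the claim error
  if kill -0 "$second_pid" 2>/dev/null; then
    echo "second target should not have acquired core 0" >&2
    exit 1
  fi

  kill "$first_pid" && wait "$first_pid"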
00:10:13.902 [2024-11-27 06:05:18.822532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:10:13.902 [2024-11-27 06:05:18.974530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.210 [2024-11-27 06:05:19.058875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.210 [2024-11-27 06:05:19.058980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.210 [2024-11-27 06:05:19.059349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.210 [2024-11-27 06:05:19.147174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59244 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59244 /var/tmp/spdk2.sock 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59244 /var/tmp/spdk2.sock 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59244 /var/tmp/spdk2.sock 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59244 ']' 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.142 06:05:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:15.142 [2024-11-27 06:05:20.064269] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:15.142 [2024-11-27 06:05:20.064966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59244 ] 00:10:15.142 [2024-11-27 06:05:20.235895] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59222 has claimed it. 00:10:15.400 [2024-11-27 06:05:20.240219] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:15.967 ERROR: process (pid: 59244) is no longer running 00:10:15.967 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59244) - No such process 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59222 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59222 ']' 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59222 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59222 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59222' 00:10:15.967 killing process with pid 59222 00:10:15.967 06:05:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59222 00:10:15.967 06:05:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59222 00:10:16.226 00:10:16.226 real 0m2.499s 00:10:16.226 user 0m7.247s 00:10:16.226 sys 0m0.451s 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.226 ************************************ 00:10:16.226 END TEST locking_overlapped_coremask 00:10:16.226 ************************************ 00:10:16.226 06:05:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:16.226 06:05:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.226 06:05:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.226 06:05:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:16.226 ************************************ 00:10:16.226 START TEST locking_overlapped_coremask_via_rpc 00:10:16.226 ************************************ 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:16.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59291 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59291 /var/tmp/spdk.sock 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59291 ']' 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.226 06:05:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.485 [2024-11-27 06:05:21.383293] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:16.485 [2024-11-27 06:05:21.383411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:10:16.485 [2024-11-27 06:05:21.535613] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
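A rough out-of-harness sketch of the core-lock collision the locking_overlapped_coremask run above exercised; the spdk_tgt path and coremasks are copied from the trace, while the sleep is only a crude stand-in for the harness's waitforlisten helper and the echo wording is illustrative:

    # First target claims cores 0-2 (mask 0x7); per-core lock files
    # /var/tmp/spdk_cpu_lock_000..002 exist while it runs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    first_pid=$!
    sleep 2   # assumption: stands in for waitforlisten polling /var/tmp/spdk.sock

    # Second target asks for cores 2-4 (mask 0x1c); core 2 is already locked, so it
    # is expected to log "Cannot create lock on core 2" and exit non-zero.
    if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock; then
        echo "overlapping coremask rejected, as the test expects"
    fi
    kill "$first_pid"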
00:10:16.485 [2024-11-27 06:05:21.535712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.744 [2024-11-27 06:05:21.616262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.744 [2024-11-27 06:05:21.616429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.744 [2024-11-27 06:05:21.616429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.744 [2024-11-27 06:05:21.701889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59309 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59309 /var/tmp/spdk2.sock 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59309 ']' 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.679 06:05:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.679 [2024-11-27 06:05:22.510215] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:17.679 [2024-11-27 06:05:22.510785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:10:17.679 [2024-11-27 06:05:22.678218] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:17.679 [2024-11-27 06:05:22.682150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.937 [2024-11-27 06:05:22.866103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.937 [2024-11-27 06:05:22.866196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:17.937 [2024-11-27 06:05:22.866199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.195 [2024-11-27 06:05:23.094544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.760 [2024-11-27 06:05:23.667511] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59291 has claimed it. 00:10:18.760 request: 00:10:18.760 { 00:10:18.760 "method": "framework_enable_cpumask_locks", 00:10:18.760 "req_id": 1 00:10:18.760 } 00:10:18.760 Got JSON-RPC error response 00:10:18.760 response: 00:10:18.760 { 00:10:18.760 "code": -32603, 00:10:18.760 "message": "Failed to claim CPU core: 2" 00:10:18.760 } 00:10:18.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
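The -32603 response above is the second, lock-disabled target refusing to claim a coremask that overlaps the first target's. A minimal sketch of that RPC exchange, using the rpc.py script and socket paths that appear elsewhere in this log in place of the harness's rpc_cmd wrapper:

    # Both targets were started with --disable-cpumask-locks (0x7 = cores 0-2,
    # 0x1c = cores 2-4); only one of them can take the per-core locks afterwards.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks   # first target: claims cores 0,1,2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected: Failed to claim CPU core: 2"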
00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59291 /var/tmp/spdk.sock 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59291 ']' 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.760 06:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59309 /var/tmp/spdk2.sock 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59309 ']' 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:19.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
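The check_remaining_locks step traced just below reduces to comparing a glob of the lock directory against a brace expansion of the cores the surviving target should hold. Pulled out on its own, assuming the 0x7 target (cores 0-2) is the one left holding locks:

    # Exactly /var/tmp/spdk_cpu_lock_000..002 should exist, nothing more and nothing less.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "lock files match coremask 0x7"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi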
00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.019 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.278 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.278 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:19.278 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:19.278 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:19.278 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:19.278 ************************************ 00:10:19.278 END TEST locking_overlapped_coremask_via_rpc 00:10:19.278 ************************************ 00:10:19.279 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:19.279 00:10:19.279 real 0m3.060s 00:10:19.279 user 0m1.722s 00:10:19.279 sys 0m0.248s 00:10:19.279 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.279 06:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.537 06:05:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:19.537 06:05:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59291 ]] 00:10:19.537 06:05:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59291 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59291 ']' 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59291 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59291 00:10:19.537 killing process with pid 59291 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59291' 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59291 00:10:19.537 06:05:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59291 00:10:19.795 06:05:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59309 ]] 00:10:19.795 06:05:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59309 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59309 ']' 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59309 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.795 
06:05:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59309 00:10:19.795 killing process with pid 59309 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59309' 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59309 00:10:19.795 06:05:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59309 00:10:20.732 06:05:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.732 06:05:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:20.732 06:05:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59291 ]] 00:10:20.733 06:05:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59291 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59291 ']' 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59291 00:10:20.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59291) - No such process 00:10:20.733 Process with pid 59291 is not found 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59291 is not found' 00:10:20.733 06:05:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59309 ]] 00:10:20.733 06:05:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59309 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59309 ']' 00:10:20.733 Process with pid 59309 is not found 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59309 00:10:20.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59309) - No such process 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59309 is not found' 00:10:20.733 06:05:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.733 00:10:20.733 real 0m21.700s 00:10:20.733 user 0m40.074s 00:10:20.733 sys 0m6.151s 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.733 06:05:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.733 ************************************ 00:10:20.733 END TEST cpu_locks 00:10:20.733 ************************************ 00:10:20.733 ************************************ 00:10:20.733 END TEST event 00:10:20.733 ************************************ 00:10:20.733 00:10:20.733 real 0m50.506s 00:10:20.733 user 1m39.611s 00:10:20.733 sys 0m10.071s 00:10:20.733 06:05:25 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.733 06:05:25 event -- common/autotest_common.sh@10 -- # set +x 00:10:20.733 06:05:25 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.733 06:05:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.733 06:05:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.733 06:05:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.733 ************************************ 00:10:20.733 START TEST thread 00:10:20.733 ************************************ 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.733 * Looking for test storage... 
00:10:20.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.733 06:05:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.733 06:05:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.733 06:05:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.733 06:05:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.733 06:05:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.733 06:05:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.733 06:05:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.733 06:05:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.733 06:05:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.733 06:05:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.733 06:05:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.733 06:05:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:20.733 06:05:25 thread -- scripts/common.sh@345 -- # : 1 00:10:20.733 06:05:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.733 06:05:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.733 06:05:25 thread -- scripts/common.sh@365 -- # decimal 1 00:10:20.733 06:05:25 thread -- scripts/common.sh@353 -- # local d=1 00:10:20.733 06:05:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.733 06:05:25 thread -- scripts/common.sh@355 -- # echo 1 00:10:20.733 06:05:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.733 06:05:25 thread -- scripts/common.sh@366 -- # decimal 2 00:10:20.733 06:05:25 thread -- scripts/common.sh@353 -- # local d=2 00:10:20.733 06:05:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.733 06:05:25 thread -- scripts/common.sh@355 -- # echo 2 00:10:20.733 06:05:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.733 06:05:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.733 06:05:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.733 06:05:25 thread -- scripts/common.sh@368 -- # return 0 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.733 --rc genhtml_branch_coverage=1 00:10:20.733 --rc genhtml_function_coverage=1 00:10:20.733 --rc genhtml_legend=1 00:10:20.733 --rc geninfo_all_blocks=1 00:10:20.733 --rc geninfo_unexecuted_blocks=1 00:10:20.733 00:10:20.733 ' 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.733 --rc genhtml_branch_coverage=1 00:10:20.733 --rc genhtml_function_coverage=1 00:10:20.733 --rc genhtml_legend=1 00:10:20.733 --rc geninfo_all_blocks=1 00:10:20.733 --rc geninfo_unexecuted_blocks=1 00:10:20.733 00:10:20.733 ' 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:20.733 --rc genhtml_branch_coverage=1 00:10:20.733 --rc genhtml_function_coverage=1 00:10:20.733 --rc genhtml_legend=1 00:10:20.733 --rc geninfo_all_blocks=1 00:10:20.733 --rc geninfo_unexecuted_blocks=1 00:10:20.733 00:10:20.733 ' 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.733 --rc genhtml_branch_coverage=1 00:10:20.733 --rc genhtml_function_coverage=1 00:10:20.733 --rc genhtml_legend=1 00:10:20.733 --rc geninfo_all_blocks=1 00:10:20.733 --rc geninfo_unexecuted_blocks=1 00:10:20.733 00:10:20.733 ' 00:10:20.733 06:05:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.733 06:05:25 thread -- common/autotest_common.sh@10 -- # set +x 00:10:20.733 ************************************ 00:10:20.733 START TEST thread_poller_perf 00:10:20.733 ************************************ 00:10:20.733 06:05:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.733 [2024-11-27 06:05:25.826354] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:20.733 [2024-11-27 06:05:25.826628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59445 ] 00:10:20.992 [2024-11-27 06:05:25.970619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.992 [2024-11-27 06:05:26.037467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.992 Running 1000 pollers for 1 seconds with 1 microseconds period. 
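The summary block below prints busy cycles, total_run_count and tsc_hz, and the poller_cost line is just their integer quotient. A worked check with the first run's figures copied from that block (the formula is inferred from the printed fields, not taken from poller_perf's source):

    busy=2210195959          # busy cycles over the 1 s measurement window
    total_run_count=312000   # poller executions in that window
    tsc_hz=2200000000        # 2.2 GHz timestamp counter
    cost_cyc=$(( busy / total_run_count ))            # 7083 cycles per poller call
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 3219 ns per poller call
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"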
00:10:22.370 [2024-11-27T06:05:27.467Z] ====================================== 00:10:22.370 [2024-11-27T06:05:27.467Z] busy:2210195959 (cyc) 00:10:22.370 [2024-11-27T06:05:27.467Z] total_run_count: 312000 00:10:22.370 [2024-11-27T06:05:27.467Z] tsc_hz: 2200000000 (cyc) 00:10:22.370 [2024-11-27T06:05:27.467Z] ====================================== 00:10:22.370 [2024-11-27T06:05:27.467Z] poller_cost: 7083 (cyc), 3219 (nsec) 00:10:22.370 00:10:22.370 ************************************ 00:10:22.370 END TEST thread_poller_perf 00:10:22.370 ************************************ 00:10:22.370 real 0m1.287s 00:10:22.370 user 0m1.135s 00:10:22.370 sys 0m0.043s 00:10:22.370 06:05:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.370 06:05:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:22.370 06:05:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:22.370 06:05:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:22.370 06:05:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.370 06:05:27 thread -- common/autotest_common.sh@10 -- # set +x 00:10:22.370 ************************************ 00:10:22.370 START TEST thread_poller_perf 00:10:22.370 ************************************ 00:10:22.370 06:05:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:22.370 [2024-11-27 06:05:27.167848] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:22.371 [2024-11-27 06:05:27.167954] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59475 ] 00:10:22.371 [2024-11-27 06:05:27.311468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.371 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:22.371 [2024-11-27 06:05:27.374667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.746 [2024-11-27T06:05:28.843Z] ====================================== 00:10:23.746 [2024-11-27T06:05:28.843Z] busy:2201793317 (cyc) 00:10:23.746 [2024-11-27T06:05:28.843Z] total_run_count: 4114000 00:10:23.746 [2024-11-27T06:05:28.843Z] tsc_hz: 2200000000 (cyc) 00:10:23.746 [2024-11-27T06:05:28.843Z] ====================================== 00:10:23.746 [2024-11-27T06:05:28.843Z] poller_cost: 535 (cyc), 243 (nsec) 00:10:23.746 00:10:23.746 real 0m1.279s 00:10:23.746 user 0m1.125s 00:10:23.746 sys 0m0.047s 00:10:23.746 ************************************ 00:10:23.746 END TEST thread_poller_perf 00:10:23.746 ************************************ 00:10:23.746 06:05:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.746 06:05:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 06:05:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:23.746 ************************************ 00:10:23.746 END TEST thread 00:10:23.746 ************************************ 00:10:23.746 00:10:23.746 real 0m2.870s 00:10:23.746 user 0m2.405s 00:10:23.746 sys 0m0.247s 00:10:23.746 06:05:28 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.746 06:05:28 thread -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 06:05:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:23.746 06:05:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:23.746 06:05:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.746 06:05:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.746 06:05:28 -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 ************************************ 00:10:23.746 START TEST app_cmdline 00:10:23.746 ************************************ 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:23.746 * Looking for test storage... 
00:10:23.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:23.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.746 06:05:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.746 --rc genhtml_branch_coverage=1 00:10:23.746 --rc genhtml_function_coverage=1 00:10:23.746 --rc genhtml_legend=1 00:10:23.746 --rc geninfo_all_blocks=1 00:10:23.746 --rc geninfo_unexecuted_blocks=1 00:10:23.746 00:10:23.746 ' 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.746 --rc genhtml_branch_coverage=1 00:10:23.746 --rc genhtml_function_coverage=1 00:10:23.746 --rc genhtml_legend=1 00:10:23.746 --rc geninfo_all_blocks=1 00:10:23.746 --rc geninfo_unexecuted_blocks=1 00:10:23.746 00:10:23.746 ' 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.746 --rc genhtml_branch_coverage=1 00:10:23.746 --rc genhtml_function_coverage=1 00:10:23.746 --rc genhtml_legend=1 00:10:23.746 --rc geninfo_all_blocks=1 00:10:23.746 --rc geninfo_unexecuted_blocks=1 00:10:23.746 00:10:23.746 ' 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.746 --rc genhtml_branch_coverage=1 00:10:23.746 --rc genhtml_function_coverage=1 00:10:23.746 --rc genhtml_legend=1 00:10:23.746 --rc geninfo_all_blocks=1 00:10:23.746 --rc geninfo_unexecuted_blocks=1 00:10:23.746 00:10:23.746 ' 00:10:23.746 06:05:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:23.746 06:05:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59565 00:10:23.746 06:05:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59565 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59565 ']' 00:10:23.746 06:05:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.746 06:05:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 [2024-11-27 06:05:28.780828] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:23.746 [2024-11-27 06:05:28.781171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:10:24.004 [2024-11-27 06:05:28.925390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.004 [2024-11-27 06:05:28.994714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.004 [2024-11-27 06:05:29.070717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.275 06:05:29 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.275 06:05:29 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:24.275 06:05:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:24.533 { 00:10:24.533 "version": "SPDK v25.01-pre git sha1 345c51d49", 00:10:24.533 "fields": { 00:10:24.533 "major": 25, 00:10:24.533 "minor": 1, 00:10:24.533 "patch": 0, 00:10:24.533 "suffix": "-pre", 00:10:24.533 "commit": "345c51d49" 00:10:24.533 } 00:10:24.533 } 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:24.533 06:05:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:24.533 06:05:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:24.533 06:05:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:24.533 06:05:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.792 06:05:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:24.792 06:05:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:24.792 06:05:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:24.792 06:05:29 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:25.053 request: 00:10:25.053 { 00:10:25.053 "method": "env_dpdk_get_mem_stats", 00:10:25.053 "req_id": 1 00:10:25.053 } 00:10:25.053 Got JSON-RPC error response 00:10:25.053 response: 00:10:25.053 { 00:10:25.053 "code": -32601, 00:10:25.053 "message": "Method not found" 00:10:25.053 } 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.053 06:05:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59565 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59565 ']' 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59565 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59565 00:10:25.053 killing process with pid 59565 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59565' 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@973 -- # kill 59565 00:10:25.053 06:05:30 app_cmdline -- common/autotest_common.sh@978 -- # wait 59565 00:10:25.618 00:10:25.618 real 0m1.933s 00:10:25.618 user 0m2.441s 00:10:25.618 sys 0m0.481s 00:10:25.618 06:05:30 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.618 06:05:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:25.618 ************************************ 00:10:25.618 END TEST app_cmdline 00:10:25.618 ************************************ 00:10:25.618 06:05:30 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:25.618 06:05:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.618 06:05:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.618 06:05:30 -- common/autotest_common.sh@10 -- # set +x 00:10:25.618 ************************************ 00:10:25.618 START TEST version 00:10:25.618 ************************************ 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:25.618 * Looking for test storage... 
00:10:25.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:25.618 06:05:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.618 06:05:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.618 06:05:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.618 06:05:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.618 06:05:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.618 06:05:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.618 06:05:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.618 06:05:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.618 06:05:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.618 06:05:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.618 06:05:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.618 06:05:30 version -- scripts/common.sh@344 -- # case "$op" in 00:10:25.618 06:05:30 version -- scripts/common.sh@345 -- # : 1 00:10:25.618 06:05:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.618 06:05:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.618 06:05:30 version -- scripts/common.sh@365 -- # decimal 1 00:10:25.618 06:05:30 version -- scripts/common.sh@353 -- # local d=1 00:10:25.618 06:05:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.618 06:05:30 version -- scripts/common.sh@355 -- # echo 1 00:10:25.618 06:05:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.618 06:05:30 version -- scripts/common.sh@366 -- # decimal 2 00:10:25.618 06:05:30 version -- scripts/common.sh@353 -- # local d=2 00:10:25.618 06:05:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.618 06:05:30 version -- scripts/common.sh@355 -- # echo 2 00:10:25.618 06:05:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.618 06:05:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.618 06:05:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.618 06:05:30 version -- scripts/common.sh@368 -- # return 0 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:25.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.618 --rc genhtml_branch_coverage=1 00:10:25.618 --rc genhtml_function_coverage=1 00:10:25.618 --rc genhtml_legend=1 00:10:25.618 --rc geninfo_all_blocks=1 00:10:25.618 --rc geninfo_unexecuted_blocks=1 00:10:25.618 00:10:25.618 ' 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:25.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.618 --rc genhtml_branch_coverage=1 00:10:25.618 --rc genhtml_function_coverage=1 00:10:25.618 --rc genhtml_legend=1 00:10:25.618 --rc geninfo_all_blocks=1 00:10:25.618 --rc geninfo_unexecuted_blocks=1 00:10:25.618 00:10:25.618 ' 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:25.618 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:25.618 --rc genhtml_branch_coverage=1 00:10:25.618 --rc genhtml_function_coverage=1 00:10:25.618 --rc genhtml_legend=1 00:10:25.618 --rc geninfo_all_blocks=1 00:10:25.618 --rc geninfo_unexecuted_blocks=1 00:10:25.618 00:10:25.618 ' 00:10:25.618 06:05:30 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:25.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.618 --rc genhtml_branch_coverage=1 00:10:25.618 --rc genhtml_function_coverage=1 00:10:25.618 --rc genhtml_legend=1 00:10:25.618 --rc geninfo_all_blocks=1 00:10:25.618 --rc geninfo_unexecuted_blocks=1 00:10:25.618 00:10:25.618 ' 00:10:25.618 06:05:30 version -- app/version.sh@17 -- # get_header_version major 00:10:25.618 06:05:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.618 06:05:30 version -- app/version.sh@14 -- # cut -f2 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.619 06:05:30 version -- app/version.sh@17 -- # major=25 00:10:25.619 06:05:30 version -- app/version.sh@18 -- # get_header_version minor 00:10:25.619 06:05:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # cut -f2 00:10:25.619 06:05:30 version -- app/version.sh@18 -- # minor=1 00:10:25.619 06:05:30 version -- app/version.sh@19 -- # get_header_version patch 00:10:25.619 06:05:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # cut -f2 00:10:25.619 06:05:30 version -- app/version.sh@19 -- # patch=0 00:10:25.619 06:05:30 version -- app/version.sh@20 -- # get_header_version suffix 00:10:25.619 06:05:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # cut -f2 00:10:25.619 06:05:30 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.619 06:05:30 version -- app/version.sh@20 -- # suffix=-pre 00:10:25.619 06:05:30 version -- app/version.sh@22 -- # version=25.1 00:10:25.619 06:05:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:25.619 06:05:30 version -- app/version.sh@28 -- # version=25.1rc0 00:10:25.619 06:05:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:25.619 06:05:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:25.876 06:05:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:25.876 06:05:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:25.876 ************************************ 00:10:25.876 END TEST version 00:10:25.876 ************************************ 00:10:25.876 00:10:25.876 real 0m0.256s 00:10:25.876 user 0m0.166s 00:10:25.876 sys 0m0.127s 00:10:25.876 06:05:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.876 06:05:30 version -- common/autotest_common.sh@10 -- # set +x 00:10:25.876 06:05:30 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:25.876 06:05:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:25.876 06:05:30 -- spdk/autotest.sh@194 -- # uname -s 00:10:25.876 06:05:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:25.876 06:05:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:25.876 06:05:30 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:10:25.876 06:05:30 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:10:25.876 06:05:30 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:25.876 06:05:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.876 06:05:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.876 06:05:30 -- common/autotest_common.sh@10 -- # set +x 00:10:25.876 ************************************ 00:10:25.876 START TEST spdk_dd 00:10:25.876 ************************************ 00:10:25.876 06:05:30 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:25.876 * Looking for test storage... 00:10:25.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:25.876 06:05:30 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.876 06:05:30 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.876 06:05:30 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.135 06:05:30 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@345 -- # : 1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@368 -- # return 0 00:10:26.135 06:05:30 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.135 06:05:30 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 06:05:30 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 06:05:30 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 06:05:30 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 06:05:30 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.135 06:05:30 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.135 06:05:30 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.135 06:05:30 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.135 06:05:30 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.135 06:05:30 spdk_dd -- paths/export.sh@5 -- # export PATH 00:10:26.135 06:05:30 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.135 06:05:30 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.396 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.396 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.396 06:05:31 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:10:26.396 06:05:31 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@233 -- # local class 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@235 -- # local progif 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@236 -- # class=01 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:10:26.396 06:05:31 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:10:26.396 06:05:31 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:26.397 06:05:31 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@139 -- # local lib 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
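The nvme_in_userspace enumeration traced above boils down to a single lspci pipeline: keep PCI functions whose class/subclass is 01/08 (mass storage / NVM Express) with programming interface 02, and print their BDFs. A paraphrased, runnable sketch of that trace (the wrapper function name is illustrative, not SPDK's scripts/common.sh verbatim; the individual commands are the ones shown in the xtrace):

  # Print the BDFs of NVMe controllers (PCI class 0108, progif 02) visible on the host.
  list_nvme_bdfs() {
      lspci -mm -n -D |
          grep -i -- -p02 |
          awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' |
          tr -d '"'
  }

On this runner the pipeline yields 0000:00:10.0 and 0000:00:11.0; the trace then runs each BDF through pci_can_use, and both are accepted since no PCI allow/block restrictions appear to be set here.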
00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
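The long run of liburing.so.* comparisons above and below is check_liburing walking every NEEDED (dynamic dependency) entry of the spdk_dd binary, as listed by objdump. A condensed sketch of that loop, same shape as the trace though not dd/common.sh verbatim:

  # Scan a binary's DT_NEEDED entries and record whether any of them is liburing.
  check_liburing() {
      local bin=$1 lib
      liburing_in_use=0
      while read -r _ lib _; do
          [[ $lib == liburing.so.* ]] && liburing_in_use=1
      done < <(objdump -p "$bin" | grep NEEDED)
      ((liburing_in_use)) && printf '* %s linked to liburing\n' "${bin##*/}"
      return 0
  }

Further down the loop reaches liburing.so.2, so liburing_in_use ends up 1 and the (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard in dd.sh evaluates false.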
00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:10:26.397 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:10:26.398 * spdk_dd linked to liburing 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:26.398 06:05:31 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:26.398 06:05:31 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:26.399 06:05:31 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:26.399 06:05:31 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:10:26.399 06:05:31 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:10:26.399 06:05:31 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:10:26.399 06:05:31 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:10:26.399 06:05:31 spdk_dd -- dd/common.sh@153 -- # return 0 00:10:26.399 06:05:31 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:10:26.399 06:05:31 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:26.399 06:05:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.399 06:05:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.399 06:05:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:26.399 ************************************ 00:10:26.399 START TEST spdk_dd_basic_rw 00:10:26.399 ************************************ 00:10:26.399 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:26.657 * Looking for test storage... 00:10:26.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.658 --rc genhtml_branch_coverage=1 00:10:26.658 --rc genhtml_function_coverage=1 00:10:26.658 --rc genhtml_legend=1 00:10:26.658 --rc geninfo_all_blocks=1 00:10:26.658 --rc geninfo_unexecuted_blocks=1 00:10:26.658 00:10:26.658 ' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.658 --rc genhtml_branch_coverage=1 00:10:26.658 --rc genhtml_function_coverage=1 00:10:26.658 --rc genhtml_legend=1 00:10:26.658 --rc geninfo_all_blocks=1 00:10:26.658 --rc geninfo_unexecuted_blocks=1 00:10:26.658 00:10:26.658 ' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.658 --rc genhtml_branch_coverage=1 00:10:26.658 --rc genhtml_function_coverage=1 00:10:26.658 --rc genhtml_legend=1 00:10:26.658 --rc geninfo_all_blocks=1 00:10:26.658 --rc geninfo_unexecuted_blocks=1 00:10:26.658 00:10:26.658 ' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.658 --rc genhtml_branch_coverage=1 00:10:26.658 --rc genhtml_function_coverage=1 00:10:26.658 --rc genhtml_legend=1 00:10:26.658 --rc geninfo_all_blocks=1 00:10:26.658 --rc geninfo_unexecuted_blocks=1 00:10:26.658 00:10:26.658 ' 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
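The get_native_nvme_bs step the trace enters next derives the namespace's native block size from spdk_nvme_identify output: it extracts the currently selected LBA format number, then that format's data size. Both regexes and the resulting 4096 are visible in the identify dump below; the following is a rough paraphrase (assuming spdk_nvme_identify is on PATH and using a scalar instead of the script's mapfile array):

  # Print the data size of the currently selected LBA format (the native block
  # size) for the NVMe controller at the given PCI address.
  get_native_nvme_bs() {
      local pci=$1 id lbaf re
      id=$(spdk_nvme_identify -r "trtype:pcie traddr:$pci") || return 1
      re='Current LBA Format: *LBA Format #([0-9]+)'
      [[ $id =~ $re ]] || return 1
      lbaf=${BASH_REMATCH[1]}                     # "04" for this namespace
      re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
      [[ $id =~ $re ]] || return 1
      echo "${BASH_REMATCH[1]}"                   # 4096
  }

basic_rw.sh records this as native_bs=4096, and the dd_bs_lt_native_bs case that follows deliberately asks spdk_dd to write with --bs=2048 and expects the "--bs value cannot be less than ... native block size" error that the trace records.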
00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:10:26.658 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:10:26.919 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:10:26.919 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.920 ************************************ 00:10:26.920 START TEST dd_bs_lt_native_bs 00:10:26.920 ************************************ 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:26.920 06:05:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.920 { 00:10:26.920 "subsystems": [ 00:10:26.920 { 00:10:26.920 "subsystem": "bdev", 00:10:26.920 "config": [ 00:10:26.920 { 00:10:26.920 "params": { 00:10:26.920 "trtype": "pcie", 00:10:26.920 "traddr": "0000:00:10.0", 00:10:26.920 "name": "Nvme0" 00:10:26.920 }, 00:10:26.920 "method": "bdev_nvme_attach_controller" 00:10:26.920 }, 00:10:26.920 { 00:10:26.920 "method": "bdev_wait_for_examine" 00:10:26.920 } 00:10:26.920 ] 00:10:26.920 } 00:10:26.920 ] 00:10:26.920 } 00:10:26.920 [2024-11-27 06:05:31.938078] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:26.920 [2024-11-27 06:05:31.938226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59909 ] 00:10:27.178 [2024-11-27 06:05:32.096776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.178 [2024-11-27 06:05:32.186680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.178 [2024-11-27 06:05:32.250137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.436 [2024-11-27 06:05:32.368194] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:10:27.437 [2024-11-27 06:05:32.368262] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:27.437 [2024-11-27 06:05:32.506459] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:27.695 00:10:27.695 real 0m0.704s 00:10:27.695 user 0m0.485s 00:10:27.695 sys 0m0.173s 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.695 
************************************ 00:10:27.695 END TEST dd_bs_lt_native_bs 00:10:27.695 ************************************ 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:27.695 ************************************ 00:10:27.695 START TEST dd_rw 00:10:27.695 ************************************ 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:27.695 06:05:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.262 06:05:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:10:28.262 06:05:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:28.262 06:05:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:28.262 06:05:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.520 { 00:10:28.520 "subsystems": [ 00:10:28.520 { 00:10:28.520 "subsystem": "bdev", 00:10:28.520 "config": [ 00:10:28.520 { 00:10:28.520 "params": { 00:10:28.520 "trtype": "pcie", 00:10:28.520 "traddr": "0000:00:10.0", 00:10:28.520 "name": "Nvme0" 00:10:28.520 }, 00:10:28.520 "method": "bdev_nvme_attach_controller" 00:10:28.520 }, 00:10:28.520 { 00:10:28.520 "method": "bdev_wait_for_examine" 00:10:28.520 } 00:10:28.520 ] 
00:10:28.520 } 00:10:28.520 ] 00:10:28.520 } 00:10:28.520 [2024-11-27 06:05:33.388099] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:28.520 [2024-11-27 06:05:33.388200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:10:28.520 [2024-11-27 06:05:33.537440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.778 [2024-11-27 06:05:33.616445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.778 [2024-11-27 06:05:33.679939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.778  [2024-11-27T06:05:34.133Z] Copying: 60/60 [kB] (average 29 MBps) 00:10:29.036 00:10:29.036 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:10:29.036 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:29.036 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:29.036 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:29.036 { 00:10:29.036 "subsystems": [ 00:10:29.036 { 00:10:29.036 "subsystem": "bdev", 00:10:29.036 "config": [ 00:10:29.036 { 00:10:29.036 "params": { 00:10:29.036 "trtype": "pcie", 00:10:29.036 "traddr": "0000:00:10.0", 00:10:29.036 "name": "Nvme0" 00:10:29.036 }, 00:10:29.036 "method": "bdev_nvme_attach_controller" 00:10:29.036 }, 00:10:29.036 { 00:10:29.036 "method": "bdev_wait_for_examine" 00:10:29.036 } 00:10:29.036 ] 00:10:29.036 } 00:10:29.036 ] 00:10:29.036 } 00:10:29.036 [2024-11-27 06:05:34.070174] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:29.037 [2024-11-27 06:05:34.070270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:10:29.325 [2024-11-27 06:05:34.218599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.325 [2024-11-27 06:05:34.284933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.325 [2024-11-27 06:05:34.343705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.584  [2024-11-27T06:05:34.681Z] Copying: 60/60 [kB] (average 19 MBps) 00:10:29.584 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:29.584 06:05:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:29.842 [2024-11-27 06:05:34.726193] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:29.843 [2024-11-27 06:05:34.726306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59980 ] 00:10:29.843 { 00:10:29.843 "subsystems": [ 00:10:29.843 { 00:10:29.843 "subsystem": "bdev", 00:10:29.843 "config": [ 00:10:29.843 { 00:10:29.843 "params": { 00:10:29.843 "trtype": "pcie", 00:10:29.843 "traddr": "0000:00:10.0", 00:10:29.843 "name": "Nvme0" 00:10:29.843 }, 00:10:29.843 "method": "bdev_nvme_attach_controller" 00:10:29.843 }, 00:10:29.843 { 00:10:29.843 "method": "bdev_wait_for_examine" 00:10:29.843 } 00:10:29.843 ] 00:10:29.843 } 00:10:29.843 ] 00:10:29.843 } 00:10:29.843 [2024-11-27 06:05:34.878847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.101 [2024-11-27 06:05:34.947493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.101 [2024-11-27 06:05:35.011314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.101  [2024-11-27T06:05:35.472Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:30.375 00:10:30.375 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:30.375 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:30.376 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:30.376 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:30.376 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:30.376 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:30.376 06:05:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:31.339 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:10:31.339 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:31.339 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:31.339 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:31.339 { 00:10:31.339 "subsystems": [ 00:10:31.339 { 00:10:31.339 "subsystem": "bdev", 00:10:31.339 "config": [ 00:10:31.339 { 00:10:31.339 "params": { 00:10:31.339 "trtype": "pcie", 00:10:31.339 "traddr": "0000:00:10.0", 00:10:31.339 "name": "Nvme0" 00:10:31.339 }, 00:10:31.339 "method": "bdev_nvme_attach_controller" 00:10:31.339 }, 00:10:31.339 { 00:10:31.339 "method": "bdev_wait_for_examine" 00:10:31.339 } 00:10:31.339 ] 00:10:31.339 } 00:10:31.339 ] 00:10:31.339 } 00:10:31.339 [2024-11-27 06:05:36.204008] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:31.339 [2024-11-27 06:05:36.204179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:10:31.339 [2024-11-27 06:05:36.356670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.339 [2024-11-27 06:05:36.422810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.598 [2024-11-27 06:05:36.485957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.598  [2024-11-27T06:05:36.954Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:31.857 00:10:31.857 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:10:31.857 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:31.857 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:31.857 06:05:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:31.857 [2024-11-27 06:05:36.864563] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:31.857 [2024-11-27 06:05:36.864668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60018 ] 00:10:31.857 { 00:10:31.857 "subsystems": [ 00:10:31.857 { 00:10:31.857 "subsystem": "bdev", 00:10:31.857 "config": [ 00:10:31.857 { 00:10:31.857 "params": { 00:10:31.857 "trtype": "pcie", 00:10:31.857 "traddr": "0000:00:10.0", 00:10:31.857 "name": "Nvme0" 00:10:31.857 }, 00:10:31.857 "method": "bdev_nvme_attach_controller" 00:10:31.857 }, 00:10:31.857 { 00:10:31.857 "method": "bdev_wait_for_examine" 00:10:31.857 } 00:10:31.857 ] 00:10:31.857 } 00:10:31.857 ] 00:10:31.857 } 00:10:32.116 [2024-11-27 06:05:37.014188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.116 [2024-11-27 06:05:37.075287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.116 [2024-11-27 06:05:37.134614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.375  [2024-11-27T06:05:37.472Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:32.375 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
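Aside (a minimal sketch, not captured console output): the dd_rw passes above pair queue depths 1 and 64 with block sizes of native_bs << 0..2 and copy count * bs bytes per direction before the bdev is wiped with a single 1 MiB block of zeroes from /dev/zero. The bash recap below uses only values visible in the xtrace; the loop variable names are illustrative rather than taken from basic_rw.sh.

  native_bs=4096                      # matched from "LBA Format #04: Data Size: 4096" earlier in the log
  for pair in "15 0" "7 1" "3 2"; do  # count/shift pairs as logged for the three block sizes
      set -- $pair
      count=$1 bs=$((native_bs << $2))
      echo "bs=$bs count=$count size=$((count * bs))"   # 61440, 57344, 49152 bytes
  done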
00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:32.375 06:05:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:32.633 { 00:10:32.633 "subsystems": [ 00:10:32.633 { 00:10:32.633 "subsystem": "bdev", 00:10:32.633 "config": [ 00:10:32.633 { 00:10:32.633 "params": { 00:10:32.633 "trtype": "pcie", 00:10:32.633 "traddr": "0000:00:10.0", 00:10:32.633 "name": "Nvme0" 00:10:32.633 }, 00:10:32.633 "method": "bdev_nvme_attach_controller" 00:10:32.633 }, 00:10:32.633 { 00:10:32.633 "method": "bdev_wait_for_examine" 00:10:32.633 } 00:10:32.633 ] 00:10:32.633 } 00:10:32.633 ] 00:10:32.633 } 00:10:32.633 [2024-11-27 06:05:37.519224] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:32.633 [2024-11-27 06:05:37.519347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60028 ] 00:10:32.633 [2024-11-27 06:05:37.672012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.892 [2024-11-27 06:05:37.744991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.892 [2024-11-27 06:05:37.808264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.892  [2024-11-27T06:05:38.246Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:33.149 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:33.149 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:33.716 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:10:33.716 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:33.716 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:33.716 06:05:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:33.716 { 00:10:33.716 "subsystems": [ 00:10:33.716 { 00:10:33.716 "subsystem": "bdev", 00:10:33.716 "config": [ 00:10:33.716 { 00:10:33.716 "params": { 00:10:33.716 "trtype": "pcie", 00:10:33.716 "traddr": "0000:00:10.0", 00:10:33.716 "name": "Nvme0" 00:10:33.716 }, 00:10:33.716 "method": "bdev_nvme_attach_controller" 00:10:33.716 }, 00:10:33.716 { 00:10:33.716 "method": "bdev_wait_for_examine" 00:10:33.716 } 00:10:33.716 ] 00:10:33.716 } 00:10:33.716 ] 00:10:33.716 } 00:10:33.716 [2024-11-27 06:05:38.742962] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:33.716 [2024-11-27 06:05:38.743109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60053 ] 00:10:33.975 [2024-11-27 06:05:38.892238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.975 [2024-11-27 06:05:38.949520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.975 [2024-11-27 06:05:39.006038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.233  [2024-11-27T06:05:39.330Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:34.233 00:10:34.233 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:10:34.233 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:34.233 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:34.233 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:34.492 [2024-11-27 06:05:39.370215] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:34.492 [2024-11-27 06:05:39.370306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60066 ] 00:10:34.492 { 00:10:34.492 "subsystems": [ 00:10:34.492 { 00:10:34.492 "subsystem": "bdev", 00:10:34.492 "config": [ 00:10:34.492 { 00:10:34.492 "params": { 00:10:34.492 "trtype": "pcie", 00:10:34.492 "traddr": "0000:00:10.0", 00:10:34.492 "name": "Nvme0" 00:10:34.492 }, 00:10:34.492 "method": "bdev_nvme_attach_controller" 00:10:34.492 }, 00:10:34.492 { 00:10:34.492 "method": "bdev_wait_for_examine" 00:10:34.492 } 00:10:34.492 ] 00:10:34.492 } 00:10:34.492 ] 00:10:34.492 } 00:10:34.492 [2024-11-27 06:05:39.517599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.492 [2024-11-27 06:05:39.577860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.751 [2024-11-27 06:05:39.632329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.751  [2024-11-27T06:05:40.117Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:35.020 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:35.020 06:05:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.020 { 00:10:35.020 "subsystems": [ 00:10:35.020 { 00:10:35.020 "subsystem": "bdev", 00:10:35.020 "config": [ 00:10:35.020 { 00:10:35.020 "params": { 00:10:35.020 "trtype": "pcie", 00:10:35.020 "traddr": "0000:00:10.0", 00:10:35.020 "name": "Nvme0" 00:10:35.020 }, 00:10:35.020 "method": "bdev_nvme_attach_controller" 00:10:35.020 }, 00:10:35.020 { 00:10:35.020 "method": "bdev_wait_for_examine" 00:10:35.020 } 00:10:35.020 ] 00:10:35.020 } 00:10:35.020 ] 00:10:35.020 } 00:10:35.020 [2024-11-27 06:05:40.016867] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:35.020 [2024-11-27 06:05:40.017007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:10:35.279 [2024-11-27 06:05:40.169779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.279 [2024-11-27 06:05:40.232403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.279 [2024-11-27 06:05:40.287309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.537  [2024-11-27T06:05:40.634Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:35.537 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:35.537 06:05:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:36.105 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:10:36.105 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:36.105 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:36.105 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:36.105 [2024-11-27 06:05:41.192454] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:36.105 [2024-11-27 06:05:41.192574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60106 ] 00:10:36.364 { 00:10:36.364 "subsystems": [ 00:10:36.364 { 00:10:36.364 "subsystem": "bdev", 00:10:36.364 "config": [ 00:10:36.364 { 00:10:36.364 "params": { 00:10:36.364 "trtype": "pcie", 00:10:36.364 "traddr": "0000:00:10.0", 00:10:36.364 "name": "Nvme0" 00:10:36.364 }, 00:10:36.364 "method": "bdev_nvme_attach_controller" 00:10:36.364 }, 00:10:36.364 { 00:10:36.364 "method": "bdev_wait_for_examine" 00:10:36.364 } 00:10:36.364 ] 00:10:36.364 } 00:10:36.364 ] 00:10:36.364 } 00:10:36.364 [2024-11-27 06:05:41.349564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.364 [2024-11-27 06:05:41.410762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.624 [2024-11-27 06:05:41.465352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.624  [2024-11-27T06:05:41.979Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:36.882 00:10:36.882 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:10:36.882 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:36.882 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:36.882 06:05:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:36.882 [2024-11-27 06:05:41.825890] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:36.882 [2024-11-27 06:05:41.826003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60120 ] 00:10:36.882 { 00:10:36.882 "subsystems": [ 00:10:36.882 { 00:10:36.882 "subsystem": "bdev", 00:10:36.882 "config": [ 00:10:36.882 { 00:10:36.882 "params": { 00:10:36.882 "trtype": "pcie", 00:10:36.882 "traddr": "0000:00:10.0", 00:10:36.882 "name": "Nvme0" 00:10:36.882 }, 00:10:36.882 "method": "bdev_nvme_attach_controller" 00:10:36.882 }, 00:10:36.882 { 00:10:36.882 "method": "bdev_wait_for_examine" 00:10:36.882 } 00:10:36.882 ] 00:10:36.882 } 00:10:36.882 ] 00:10:36.882 } 00:10:37.160 [2024-11-27 06:05:41.980892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.160 [2024-11-27 06:05:42.049332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.160 [2024-11-27 06:05:42.109416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.160  [2024-11-27T06:05:42.515Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:37.418 00:10:37.418 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:37.418 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:37.418 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:37.418 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:37.418 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:37.419 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:37.419 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:37.419 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:37.419 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:37.419 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.419 06:05:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.419 [2024-11-27 06:05:42.487709] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:37.419 [2024-11-27 06:05:42.487833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60135 ] 00:10:37.419 { 00:10:37.419 "subsystems": [ 00:10:37.419 { 00:10:37.419 "subsystem": "bdev", 00:10:37.419 "config": [ 00:10:37.419 { 00:10:37.419 "params": { 00:10:37.419 "trtype": "pcie", 00:10:37.419 "traddr": "0000:00:10.0", 00:10:37.419 "name": "Nvme0" 00:10:37.419 }, 00:10:37.419 "method": "bdev_nvme_attach_controller" 00:10:37.419 }, 00:10:37.419 { 00:10:37.419 "method": "bdev_wait_for_examine" 00:10:37.419 } 00:10:37.419 ] 00:10:37.419 } 00:10:37.419 ] 00:10:37.419 } 00:10:37.677 [2024-11-27 06:05:42.635880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.677 [2024-11-27 06:05:42.705791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.677 [2024-11-27 06:05:42.764926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.935  [2024-11-27T06:05:43.290Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:38.193 00:10:38.193 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:38.193 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:38.193 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:38.193 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:38.193 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:38.193 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:38.194 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:38.194 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.760 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:10:38.760 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:38.760 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:38.760 06:05:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.760 [2024-11-27 06:05:43.727400] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:38.760 [2024-11-27 06:05:43.727506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:10:38.760 { 00:10:38.760 "subsystems": [ 00:10:38.760 { 00:10:38.760 "subsystem": "bdev", 00:10:38.760 "config": [ 00:10:38.760 { 00:10:38.760 "params": { 00:10:38.760 "trtype": "pcie", 00:10:38.760 "traddr": "0000:00:10.0", 00:10:38.760 "name": "Nvme0" 00:10:38.760 }, 00:10:38.760 "method": "bdev_nvme_attach_controller" 00:10:38.760 }, 00:10:38.760 { 00:10:38.761 "method": "bdev_wait_for_examine" 00:10:38.761 } 00:10:38.761 ] 00:10:38.761 } 00:10:38.761 ] 00:10:38.761 } 00:10:39.032 [2024-11-27 06:05:43.879701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.032 [2024-11-27 06:05:43.952844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.032 [2024-11-27 06:05:44.011980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.325  [2024-11-27T06:05:44.422Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:39.325 00:10:39.325 06:05:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:39.325 06:05:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:10:39.325 06:05:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:39.325 06:05:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.325 { 00:10:39.325 "subsystems": [ 00:10:39.325 { 00:10:39.325 "subsystem": "bdev", 00:10:39.325 "config": [ 00:10:39.325 { 00:10:39.325 "params": { 00:10:39.325 "trtype": "pcie", 00:10:39.325 "traddr": "0000:00:10.0", 00:10:39.325 "name": "Nvme0" 00:10:39.325 }, 00:10:39.325 "method": "bdev_nvme_attach_controller" 00:10:39.325 }, 00:10:39.325 { 00:10:39.325 "method": "bdev_wait_for_examine" 00:10:39.325 } 00:10:39.325 ] 00:10:39.325 } 00:10:39.325 ] 00:10:39.325 } 00:10:39.583 [2024-11-27 06:05:44.429859] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:39.583 [2024-11-27 06:05:44.430056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60174 ] 00:10:39.583 [2024-11-27 06:05:44.591970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.583 [2024-11-27 06:05:44.665386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.840 [2024-11-27 06:05:44.730592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.840  [2024-11-27T06:05:45.195Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:40.098 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:40.098 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.098 { 00:10:40.098 "subsystems": [ 00:10:40.098 { 00:10:40.098 "subsystem": "bdev", 00:10:40.098 "config": [ 00:10:40.098 { 00:10:40.098 "params": { 00:10:40.098 "trtype": "pcie", 00:10:40.098 "traddr": "0000:00:10.0", 00:10:40.098 "name": "Nvme0" 00:10:40.098 }, 00:10:40.098 "method": "bdev_nvme_attach_controller" 00:10:40.098 }, 00:10:40.098 { 00:10:40.098 "method": "bdev_wait_for_examine" 00:10:40.098 } 00:10:40.098 ] 00:10:40.098 } 00:10:40.098 ] 00:10:40.098 } 00:10:40.098 [2024-11-27 06:05:45.127849] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:40.098 [2024-11-27 06:05:45.127950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60190 ] 00:10:40.356 [2024-11-27 06:05:45.273690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.356 [2024-11-27 06:05:45.338053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.356 [2024-11-27 06:05:45.394309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.614  [2024-11-27T06:05:45.711Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:40.614 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:40.871 06:05:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.437 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:10:41.437 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:41.437 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:41.437 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.437 { 00:10:41.437 "subsystems": [ 00:10:41.437 { 00:10:41.437 "subsystem": "bdev", 00:10:41.437 "config": [ 00:10:41.437 { 00:10:41.437 "params": { 00:10:41.437 "trtype": "pcie", 00:10:41.437 "traddr": "0000:00:10.0", 00:10:41.437 "name": "Nvme0" 00:10:41.437 }, 00:10:41.437 "method": "bdev_nvme_attach_controller" 00:10:41.437 }, 00:10:41.437 { 00:10:41.437 "method": "bdev_wait_for_examine" 00:10:41.437 } 00:10:41.437 ] 00:10:41.437 } 00:10:41.437 ] 00:10:41.437 } 00:10:41.437 [2024-11-27 06:05:46.337695] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:41.437 [2024-11-27 06:05:46.337805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:10:41.437 [2024-11-27 06:05:46.484416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.696 [2024-11-27 06:05:46.548054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.696 [2024-11-27 06:05:46.602495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.696  [2024-11-27T06:05:47.052Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:41.955 00:10:41.955 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:10:41.955 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:41.955 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:41.956 06:05:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.956 [2024-11-27 06:05:46.955964] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:41.956 [2024-11-27 06:05:46.956072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:10:41.956 { 00:10:41.956 "subsystems": [ 00:10:41.956 { 00:10:41.956 "subsystem": "bdev", 00:10:41.956 "config": [ 00:10:41.956 { 00:10:41.956 "params": { 00:10:41.956 "trtype": "pcie", 00:10:41.956 "traddr": "0000:00:10.0", 00:10:41.956 "name": "Nvme0" 00:10:41.956 }, 00:10:41.956 "method": "bdev_nvme_attach_controller" 00:10:41.956 }, 00:10:41.956 { 00:10:41.956 "method": "bdev_wait_for_examine" 00:10:41.956 } 00:10:41.956 ] 00:10:41.956 } 00:10:41.956 ] 00:10:41.956 } 00:10:42.213 [2024-11-27 06:05:47.099316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.213 [2024-11-27 06:05:47.164847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.213 [2024-11-27 06:05:47.220919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.471  [2024-11-27T06:05:47.568Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:42.471 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
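A second hedged aside (again not captured output): every zero-fill between passes runs with --bs=1048576 --count=1, which is consistent with a single 1 MiB block covering even the largest transfer in this test, 61440 bytes. The sketch below shows that ceiling division; the variable names are chosen here for illustration only.

  wipe_bs=1048576                     # block size of the /dev/zero overwrite seen in the xtrace
  for size in 61440 57344 49152; do   # pass sizes logged for bs 4096, 8192 and 16384
      echo "size=$size wipe_blocks=$(( (size + wipe_bs - 1) / wipe_bs ))"   # always 1
  done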
00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:42.471 06:05:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:42.728 { 00:10:42.728 "subsystems": [ 00:10:42.728 { 00:10:42.728 "subsystem": "bdev", 00:10:42.728 "config": [ 00:10:42.728 { 00:10:42.728 "params": { 00:10:42.728 "trtype": "pcie", 00:10:42.728 "traddr": "0000:00:10.0", 00:10:42.728 "name": "Nvme0" 00:10:42.728 }, 00:10:42.728 "method": "bdev_nvme_attach_controller" 00:10:42.728 }, 00:10:42.728 { 00:10:42.728 "method": "bdev_wait_for_examine" 00:10:42.728 } 00:10:42.728 ] 00:10:42.728 } 00:10:42.728 ] 00:10:42.728 } 00:10:42.728 [2024-11-27 06:05:47.591216] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:42.728 [2024-11-27 06:05:47.591330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60243 ] 00:10:42.728 [2024-11-27 06:05:47.735614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.728 [2024-11-27 06:05:47.796230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.986 [2024-11-27 06:05:47.853571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.986  [2024-11-27T06:05:48.341Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:43.244 00:10:43.244 00:10:43.244 real 0m15.558s 00:10:43.244 user 0m11.385s 00:10:43.244 sys 0m5.829s 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.244 ************************************ 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:43.244 END TEST dd_rw 00:10:43.244 ************************************ 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:43.244 ************************************ 00:10:43.244 START TEST dd_rw_offset 00:10:43.244 ************************************ 00:10:43.244 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=h5wx5jdyckg1sri6wdd1zq5wez94eue0f0eao7sezrfejexj3g898eo2k5uxzdq4va1ambzw7is433lkwlkpldi3nqp8q9sjxdiqixm1nz15lbjqnboc2r5g8ajwjsrmh8v565jqckd71or7prp45qg1z6st100i1ryyoaq10yaffxa7wwtwlsg727nxu541juhlkadv5eg6nlk0aay0502q2f9nfjvvbiisaefyfrokk4jif4m9g3dv4s3xqv03ae0z69rhby45tq3fuhflmf4vuqsxx94f4sf491zez6kwosbr0t6yj49nbhky6exkovstc1oo4bey2rju2ccod6aw3d15pxhllompt2f0619fndgcxo0srd539hyyo6vu01k8umivwgbiwuf0adm3wgerlb6l3f9rzzgjcy8yd4lf3awqw9tq2wk36lgsu3iz6lrhf8k9sj7yjypfwkimim4asmldp16p0rd11tqgwwi2fk13ejy45ggtvj6i4ekz5dulmyrpkh3exl08lhykhmhucxe5uitnp4s839p4uthxtj7v1jxo2avg2hfaf27i7fpxbjk10hv7mvn5vaggzqgzxkp8atak4pz7y0ae321zni6pegufhnq5ukx3iepurr0xxrsq9x7sjsd99scrvcf7h44ueiefqjls07d3s1868njdshmgpkw2ozsqygr4qbgdvoh9vsxts31b6mk5c7t4alhtygc4oh5tj6jc24ni5fmpfc21wy3j0j6tgg78tahse8btzs6yqeate7x2qwahwjrkfmeyg4x442pr6hx6m4px6lfv4v0zalkfbvb6q9ne5ford8zkg2cgpezqdkm7nonp5uo9y003frxr0xrtvfzdvtsr57k1lodlce62qsk7wmen8frwtphu3hsqm63jt04mw80c0an7kcyqli32xuqp16qdkmsug8o2msi85gmzdp7knr0y8avxat0vhckndwlodc4lvsvzorhee1yhe3khds0kc7m8p7glk0nevbbuu57sio9fza9c3q0uxd0woojmm6rn4z0gbkx9w261rskzdmrjycu8j6u3ujptjgap2788r66foldai6ea7elq3l8a0geo150j75gw56ckos9vwf5n3z7yuoyyjdevz6b5c8sfgeqs2bw9gu4jf4q9y337inb6j3tnb3egvd3tjirbcgyao48xr2gbhqniwk1hxirt4p2nlapipyr2e8b1noymoyoi64dzevpb45okon2e1j3t5o6cssh4xzujxu71w5z8fi4918vzv4lwtexfy4uo1666wchinuezirav5jh9pbsuyrvknb2lm0g7ey2b6o68svvjgeszmpn1lpycqaxituznkr3zhqbjhrhto0e3qv6hrwq82jw777ncy9kvhwfz1rlizbc7jut14ly3unj1rkvbb5axm82qdkh3hw41p4y1l66cgilf8xujsrty5ugfgef94el897gf2mfuvvgyhscqtjlpcc6k3kav1uvbchy0njv7hos383u8m6d0ate3qbjw9n1hbi9oppzqmdn341go2bb7lnmtd63cs77ja9kg6dp2522jb2n3as9s8hwzyldhrirtu2xig39hbjjbyuyvfermwdngzhq3g9anylgzcszur8rbsrwmcp8wjttypxliaculf3pmmtee4ntms6s8g5twkdhn1m8i3daffnaixx7tph237gws076iqs06yj3r10g6i08pft95o5x3b58sortnjky0z79rt2f230q5breb9cw8rj26s8cb73hx35rca8l3glctminxf9m6pyobp9ao090ni0g6c484gu9i07tskp4actx6qap3iw6cwf13o87j1paqxhsld2f61twrmhtnzt88pmmsbcpicwm3xgfi24ct7amjzqkilasy5dwnnkumkm9p5rk0uwp39u2fiq1pftyhqq6d6p71itqideqw0n5pjde75xucscar8orupfnkhgspqyq9b59c8tkes2sllpioip4kcovaz4byd5f9zl5lqakksd4yirmfb506hogwjq9olfacpa25517oaf30k6xzp3li33jvgv5urpxi8ktwk2tytt88o5ep6ho3q0ztrndawjki9ifinwb0ewmoa1mef6507veu3vdoqidcsz3c8kte9f9qymi07pxcrf0qw5609zi6kz82qkjzi4d5tdaqo6zu8wykvwgr3wdg8lopl1hcj9rte7cnfakkrkf2cxz2j3nxt4wefcvi0d99vw7kcxehjveyw3qkru1wejwfemplajkg6tuab5qw1tqx54ka0gjvzm1lh7sv38pf7pnb2umfpnf71mtack2dn90ho2zh96x45gedtaku33zve9hvcbdsctwr8lpzqf930zns81z8lxwi7mbz6rgltskyynetgyuovfidxq5lf00jvbixuonu3dnsa4u2mrx6aeskk8cufju6rc0z9zk49fs5c4tfigkugok7574xxc1vgb4ctpstk26qoflcqalj9dpiekjigsk5hn68450t5ip021f7c6uzy6i0aie43408met2l4whf2fso2wks0837ufdy2yxwxo7yv84aw3yi5678n8koz44u78ce26x6355b3sjwxjh99aa2g6w2nhe07qsypcafdxd9if4vte74ukqsyqftfhre8ss4a4gd085gakkng1u7yrmai49s2uc1wf8kn8pz5ms5lbk9k09o6ef5o2779qws5nq6fl3r3efk5u4p023qo61kkz10h59gk4dunw6hjedi1hlwro59qloti9vvdhg6w63n4t4ktkne5yo5octhy7gbezcrmmrptvte0qnba44r5gyp7tebu22tbwyb3cm4l894cxknynqit2956m2isgwaxe1vetntv3yj0xysvaudqntjti7lk3fpl7fio5t3fc2dcvixa0qa8va2dhhxnt7q4bwpa2sshcg2ky2ljy5v8hl65jz0eeba35ybl15uqe827zfoy67w7p1i4luhruxhdnhjg3p7ku77jyoi4r8r88x4pwnhd64ikri1g39xg3128jmk01q2r6xmmeihn606wq03x56u9mtcrrok43fynqaady4r7psxae4kqx7f05xjvn1eix8s6ytetacogyhb4gttyhe4qoc18g1qor9k5y0yf30y091z3gmmb86y5x61jshs9xm8uzncifa6sxh8ikzbpj9iuvpeodkaigio779lkl1xun5bn7jc0g1w3qll7ivogcjdqfa3m0fvgs0sjmp64f9n8a9pu185of51aclkytv8k6x6ictbkpts6sjwalv1a97y4ud2r83uibrgjilkk4qdt3vqmkq4t2i4sup97fl8w6sok4k80smbf8ow55hcbmu0d4lc3ts3hm3kdprbbo3ku2sf77ei04hfvwdrjj6tf0kieofa90rhkg0slqgm3s8ax90lygmf5evdsebnpfejt2ea827ejsw5epqfna6ksa4mkin072y1gbtr2hiwa2hqtsh3yeynp516e
j1vn7j5t7zdtjqn06bl6czmhfhslcqfyf4aleox7jey2bnofc686wl6dl649kjpray5xmbgldr5n4zkr7zraswsx1syicxhgb33yk4ehrnh2u4ez5fgbh55wuw9ndz207tguwp5do08s7frcgfd41buwvlq32f2xn4dk45uufl86inrh39zwmpkd8csc3b9dm6dcw3j48t0u59944x6hsblktpk3zudlu9mq3g3wat0mgh5tpzxkta9fqmkaddv5gh203dawfjvl6fzj6lq8qyxlal9bbyirl5ltimg3ukxx7f3kci9nocfp75c11p8woekteh6hmwzo42ss6u97wmi3r5da4622aqjl62hjt2lah0xcrxg96w05bjtovf9szvagc9alzoz1o72ozupx2exqywcm7vyeyvzxl7ssd6gxcn4300j0kbicjj3ctxudzeae3jjtv1v283legs78eqhm7ed6grhrhtn3l6su9d501rzratnogb5cgowrjtcigzxwwtps7cn7ljsf6htgjmyws76znk380m 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:43.245 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 [2024-11-27 06:05:48.337621] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:43.245 [2024-11-27 06:05:48.337740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ] 00:10:43.503 { 00:10:43.503 "subsystems": [ 00:10:43.503 { 00:10:43.503 "subsystem": "bdev", 00:10:43.503 "config": [ 00:10:43.503 { 00:10:43.503 "params": { 00:10:43.503 "trtype": "pcie", 00:10:43.503 "traddr": "0000:00:10.0", 00:10:43.503 "name": "Nvme0" 00:10:43.503 }, 00:10:43.503 "method": "bdev_nvme_attach_controller" 00:10:43.503 }, 00:10:43.503 { 00:10:43.503 "method": "bdev_wait_for_examine" 00:10:43.503 } 00:10:43.503 ] 00:10:43.503 } 00:10:43.503 ] 00:10:43.503 } 00:10:43.503 [2024-11-27 06:05:48.483955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.503 [2024-11-27 06:05:48.547394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.762 [2024-11-27 06:05:48.604636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.762  [2024-11-27T06:05:49.117Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:44.020 00:10:44.020 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:10:44.020 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:10:44.020 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:44.020 06:05:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:44.020 [2024-11-27 06:05:48.976593] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:44.020 [2024-11-27 06:05:48.976721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ] 00:10:44.020 { 00:10:44.020 "subsystems": [ 00:10:44.020 { 00:10:44.020 "subsystem": "bdev", 00:10:44.020 "config": [ 00:10:44.020 { 00:10:44.020 "params": { 00:10:44.020 "trtype": "pcie", 00:10:44.020 "traddr": "0000:00:10.0", 00:10:44.020 "name": "Nvme0" 00:10:44.020 }, 00:10:44.020 "method": "bdev_nvme_attach_controller" 00:10:44.020 }, 00:10:44.020 { 00:10:44.020 "method": "bdev_wait_for_examine" 00:10:44.020 } 00:10:44.020 ] 00:10:44.020 } 00:10:44.020 ] 00:10:44.020 } 00:10:44.279 [2024-11-27 06:05:49.120472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.279 [2024-11-27 06:05:49.190808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.279 [2024-11-27 06:05:49.246772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.279  [2024-11-27T06:05:49.635Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:44.538 00:10:44.538 06:05:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ h5wx5jdyckg1sri6wdd1zq5wez94eue0f0eao7sezrfejexj3g898eo2k5uxzdq4va1ambzw7is433lkwlkpldi3nqp8q9sjxdiqixm1nz15lbjqnboc2r5g8ajwjsrmh8v565jqckd71or7prp45qg1z6st100i1ryyoaq10yaffxa7wwtwlsg727nxu541juhlkadv5eg6nlk0aay0502q2f9nfjvvbiisaefyfrokk4jif4m9g3dv4s3xqv03ae0z69rhby45tq3fuhflmf4vuqsxx94f4sf491zez6kwosbr0t6yj49nbhky6exkovstc1oo4bey2rju2ccod6aw3d15pxhllompt2f0619fndgcxo0srd539hyyo6vu01k8umivwgbiwuf0adm3wgerlb6l3f9rzzgjcy8yd4lf3awqw9tq2wk36lgsu3iz6lrhf8k9sj7yjypfwkimim4asmldp16p0rd11tqgwwi2fk13ejy45ggtvj6i4ekz5dulmyrpkh3exl08lhykhmhucxe5uitnp4s839p4uthxtj7v1jxo2avg2hfaf27i7fpxbjk10hv7mvn5vaggzqgzxkp8atak4pz7y0ae321zni6pegufhnq5ukx3iepurr0xxrsq9x7sjsd99scrvcf7h44ueiefqjls07d3s1868njdshmgpkw2ozsqygr4qbgdvoh9vsxts31b6mk5c7t4alhtygc4oh5tj6jc24ni5fmpfc21wy3j0j6tgg78tahse8btzs6yqeate7x2qwahwjrkfmeyg4x442pr6hx6m4px6lfv4v0zalkfbvb6q9ne5ford8zkg2cgpezqdkm7nonp5uo9y003frxr0xrtvfzdvtsr57k1lodlce62qsk7wmen8frwtphu3hsqm63jt04mw80c0an7kcyqli32xuqp16qdkmsug8o2msi85gmzdp7knr0y8avxat0vhckndwlodc4lvsvzorhee1yhe3khds0kc7m8p7glk0nevbbuu57sio9fza9c3q0uxd0woojmm6rn4z0gbkx9w261rskzdmrjycu8j6u3ujptjgap2788r66foldai6ea7elq3l8a0geo150j75gw56ckos9vwf5n3z7yuoyyjdevz6b5c8sfgeqs2bw9gu4jf4q9y337inb6j3tnb3egvd3tjirbcgyao48xr2gbhqniwk1hxirt4p2nlapipyr2e8b1noymoyoi64dzevpb45okon2e1j3t5o6cssh4xzujxu71w5z8fi4918vzv4lwtexfy4uo1666wchinuezirav5jh9pbsuyrvknb2lm0g7ey2b6o68svvjgeszmpn1lpycqaxituznkr3zhqbjhrhto0e3qv6hrwq82jw777ncy9kvhwfz1rlizbc7jut14ly3unj1rkvbb5axm82qdkh3hw41p4y1l66cgilf8xujsrty5ugfgef94el897gf2mfuvvgyhscqtjlpcc6k3kav1uvbchy0njv7hos383u8m6d0ate3qbjw9n1hbi9oppzqmdn341go2bb7lnmtd63cs77ja9kg6dp2522jb2n3as9s8hwzyldhrirtu2xig39hbjjbyuyvfermwdngzhq3g9anylgzcszur8rbsrwmcp8wjttypxliaculf3pmmtee4ntms6s8g5twkdhn1m8i3daffnaixx7tph237gws076iqs06yj3r10g6i08pft95o5x3b58sortnjky0z79rt2f230q5breb9cw8rj26s8cb73hx35rca8l3glctminxf9m6pyobp9ao090ni0g6c484gu9i07tskp4actx6qap3iw6cwf13o87j1paqxhsld2f61twrmhtnzt88pmmsbcpicwm3xgfi24ct7amjzqkilasy5dwnnkumkm9p5rk0uwp39u2fiq1pftyhqq6d6p71itqideqw0n5pjde75xucscar8orupfnkhgspqyq9b59c8tkes2sllpioip4kcovaz4byd5f9zl5lqakksd4yirmfb506hogwjq9olfacpa25517oaf30k6xzp3li33jvgv5urpxi8ktwk2tytt88o5ep6ho3q0ztr
ndawjki9ifinwb0ewmoa1mef6507veu3vdoqidcsz3c8kte9f9qymi07pxcrf0qw5609zi6kz82qkjzi4d5tdaqo6zu8wykvwgr3wdg8lopl1hcj9rte7cnfakkrkf2cxz2j3nxt4wefcvi0d99vw7kcxehjveyw3qkru1wejwfemplajkg6tuab5qw1tqx54ka0gjvzm1lh7sv38pf7pnb2umfpnf71mtack2dn90ho2zh96x45gedtaku33zve9hvcbdsctwr8lpzqf930zns81z8lxwi7mbz6rgltskyynetgyuovfidxq5lf00jvbixuonu3dnsa4u2mrx6aeskk8cufju6rc0z9zk49fs5c4tfigkugok7574xxc1vgb4ctpstk26qoflcqalj9dpiekjigsk5hn68450t5ip021f7c6uzy6i0aie43408met2l4whf2fso2wks0837ufdy2yxwxo7yv84aw3yi5678n8koz44u78ce26x6355b3sjwxjh99aa2g6w2nhe07qsypcafdxd9if4vte74ukqsyqftfhre8ss4a4gd085gakkng1u7yrmai49s2uc1wf8kn8pz5ms5lbk9k09o6ef5o2779qws5nq6fl3r3efk5u4p023qo61kkz10h59gk4dunw6hjedi1hlwro59qloti9vvdhg6w63n4t4ktkne5yo5octhy7gbezcrmmrptvte0qnba44r5gyp7tebu22tbwyb3cm4l894cxknynqit2956m2isgwaxe1vetntv3yj0xysvaudqntjti7lk3fpl7fio5t3fc2dcvixa0qa8va2dhhxnt7q4bwpa2sshcg2ky2ljy5v8hl65jz0eeba35ybl15uqe827zfoy67w7p1i4luhruxhdnhjg3p7ku77jyoi4r8r88x4pwnhd64ikri1g39xg3128jmk01q2r6xmmeihn606wq03x56u9mtcrrok43fynqaady4r7psxae4kqx7f05xjvn1eix8s6ytetacogyhb4gttyhe4qoc18g1qor9k5y0yf30y091z3gmmb86y5x61jshs9xm8uzncifa6sxh8ikzbpj9iuvpeodkaigio779lkl1xun5bn7jc0g1w3qll7ivogcjdqfa3m0fvgs0sjmp64f9n8a9pu185of51aclkytv8k6x6ictbkpts6sjwalv1a97y4ud2r83uibrgjilkk4qdt3vqmkq4t2i4sup97fl8w6sok4k80smbf8ow55hcbmu0d4lc3ts3hm3kdprbbo3ku2sf77ei04hfvwdrjj6tf0kieofa90rhkg0slqgm3s8ax90lygmf5evdsebnpfejt2ea827ejsw5epqfna6ksa4mkin072y1gbtr2hiwa2hqtsh3yeynp516ej1vn7j5t7zdtjqn06bl6czmhfhslcqfyf4aleox7jey2bnofc686wl6dl649kjpray5xmbgldr5n4zkr7zraswsx1syicxhgb33yk4ehrnh2u4ez5fgbh55wuw9ndz207tguwp5do08s7frcgfd41buwvlq32f2xn4dk45uufl86inrh39zwmpkd8csc3b9dm6dcw3j48t0u59944x6hsblktpk3zudlu9mq3g3wat0mgh5tpzxkta9fqmkaddv5gh203dawfjvl6fzj6lq8qyxlal9bbyirl5ltimg3ukxx7f3kci9nocfp75c11p8woekteh6hmwzo42ss6u97wmi3r5da4622aqjl62hjt2lah0xcrxg96w05bjtovf9szvagc9alzoz1o72ozupx2exqywcm7vyeyvzxl7ssd6gxcn4300j0kbicjj3ctxudzeae3jjtv1v283legs78eqhm7ed6grhrhtn3l6su9d501rzratnogb5cgowrjtcigzxwwtps7cn7ljsf6htgjmyws76znk380m == 
\h\5\w\x\5\j\d\y\c\k\g\1\s\r\i\6\w\d\d\1\z\q\5\w\e\z\9\4\e\u\e\0\f\0\e\a\o\7\s\e\z\r\f\e\j\e\x\j\3\g\8\9\8\e\o\2\k\5\u\x\z\d\q\4\v\a\1\a\m\b\z\w\7\i\s\4\3\3\l\k\w\l\k\p\l\d\i\3\n\q\p\8\q\9\s\j\x\d\i\q\i\x\m\1\n\z\1\5\l\b\j\q\n\b\o\c\2\r\5\g\8\a\j\w\j\s\r\m\h\8\v\5\6\5\j\q\c\k\d\7\1\o\r\7\p\r\p\4\5\q\g\1\z\6\s\t\1\0\0\i\1\r\y\y\o\a\q\1\0\y\a\f\f\x\a\7\w\w\t\w\l\s\g\7\2\7\n\x\u\5\4\1\j\u\h\l\k\a\d\v\5\e\g\6\n\l\k\0\a\a\y\0\5\0\2\q\2\f\9\n\f\j\v\v\b\i\i\s\a\e\f\y\f\r\o\k\k\4\j\i\f\4\m\9\g\3\d\v\4\s\3\x\q\v\0\3\a\e\0\z\6\9\r\h\b\y\4\5\t\q\3\f\u\h\f\l\m\f\4\v\u\q\s\x\x\9\4\f\4\s\f\4\9\1\z\e\z\6\k\w\o\s\b\r\0\t\6\y\j\4\9\n\b\h\k\y\6\e\x\k\o\v\s\t\c\1\o\o\4\b\e\y\2\r\j\u\2\c\c\o\d\6\a\w\3\d\1\5\p\x\h\l\l\o\m\p\t\2\f\0\6\1\9\f\n\d\g\c\x\o\0\s\r\d\5\3\9\h\y\y\o\6\v\u\0\1\k\8\u\m\i\v\w\g\b\i\w\u\f\0\a\d\m\3\w\g\e\r\l\b\6\l\3\f\9\r\z\z\g\j\c\y\8\y\d\4\l\f\3\a\w\q\w\9\t\q\2\w\k\3\6\l\g\s\u\3\i\z\6\l\r\h\f\8\k\9\s\j\7\y\j\y\p\f\w\k\i\m\i\m\4\a\s\m\l\d\p\1\6\p\0\r\d\1\1\t\q\g\w\w\i\2\f\k\1\3\e\j\y\4\5\g\g\t\v\j\6\i\4\e\k\z\5\d\u\l\m\y\r\p\k\h\3\e\x\l\0\8\l\h\y\k\h\m\h\u\c\x\e\5\u\i\t\n\p\4\s\8\3\9\p\4\u\t\h\x\t\j\7\v\1\j\x\o\2\a\v\g\2\h\f\a\f\2\7\i\7\f\p\x\b\j\k\1\0\h\v\7\m\v\n\5\v\a\g\g\z\q\g\z\x\k\p\8\a\t\a\k\4\p\z\7\y\0\a\e\3\2\1\z\n\i\6\p\e\g\u\f\h\n\q\5\u\k\x\3\i\e\p\u\r\r\0\x\x\r\s\q\9\x\7\s\j\s\d\9\9\s\c\r\v\c\f\7\h\4\4\u\e\i\e\f\q\j\l\s\0\7\d\3\s\1\8\6\8\n\j\d\s\h\m\g\p\k\w\2\o\z\s\q\y\g\r\4\q\b\g\d\v\o\h\9\v\s\x\t\s\3\1\b\6\m\k\5\c\7\t\4\a\l\h\t\y\g\c\4\o\h\5\t\j\6\j\c\2\4\n\i\5\f\m\p\f\c\2\1\w\y\3\j\0\j\6\t\g\g\7\8\t\a\h\s\e\8\b\t\z\s\6\y\q\e\a\t\e\7\x\2\q\w\a\h\w\j\r\k\f\m\e\y\g\4\x\4\4\2\p\r\6\h\x\6\m\4\p\x\6\l\f\v\4\v\0\z\a\l\k\f\b\v\b\6\q\9\n\e\5\f\o\r\d\8\z\k\g\2\c\g\p\e\z\q\d\k\m\7\n\o\n\p\5\u\o\9\y\0\0\3\f\r\x\r\0\x\r\t\v\f\z\d\v\t\s\r\5\7\k\1\l\o\d\l\c\e\6\2\q\s\k\7\w\m\e\n\8\f\r\w\t\p\h\u\3\h\s\q\m\6\3\j\t\0\4\m\w\8\0\c\0\a\n\7\k\c\y\q\l\i\3\2\x\u\q\p\1\6\q\d\k\m\s\u\g\8\o\2\m\s\i\8\5\g\m\z\d\p\7\k\n\r\0\y\8\a\v\x\a\t\0\v\h\c\k\n\d\w\l\o\d\c\4\l\v\s\v\z\o\r\h\e\e\1\y\h\e\3\k\h\d\s\0\k\c\7\m\8\p\7\g\l\k\0\n\e\v\b\b\u\u\5\7\s\i\o\9\f\z\a\9\c\3\q\0\u\x\d\0\w\o\o\j\m\m\6\r\n\4\z\0\g\b\k\x\9\w\2\6\1\r\s\k\z\d\m\r\j\y\c\u\8\j\6\u\3\u\j\p\t\j\g\a\p\2\7\8\8\r\6\6\f\o\l\d\a\i\6\e\a\7\e\l\q\3\l\8\a\0\g\e\o\1\5\0\j\7\5\g\w\5\6\c\k\o\s\9\v\w\f\5\n\3\z\7\y\u\o\y\y\j\d\e\v\z\6\b\5\c\8\s\f\g\e\q\s\2\b\w\9\g\u\4\j\f\4\q\9\y\3\3\7\i\n\b\6\j\3\t\n\b\3\e\g\v\d\3\t\j\i\r\b\c\g\y\a\o\4\8\x\r\2\g\b\h\q\n\i\w\k\1\h\x\i\r\t\4\p\2\n\l\a\p\i\p\y\r\2\e\8\b\1\n\o\y\m\o\y\o\i\6\4\d\z\e\v\p\b\4\5\o\k\o\n\2\e\1\j\3\t\5\o\6\c\s\s\h\4\x\z\u\j\x\u\7\1\w\5\z\8\f\i\4\9\1\8\v\z\v\4\l\w\t\e\x\f\y\4\u\o\1\6\6\6\w\c\h\i\n\u\e\z\i\r\a\v\5\j\h\9\p\b\s\u\y\r\v\k\n\b\2\l\m\0\g\7\e\y\2\b\6\o\6\8\s\v\v\j\g\e\s\z\m\p\n\1\l\p\y\c\q\a\x\i\t\u\z\n\k\r\3\z\h\q\b\j\h\r\h\t\o\0\e\3\q\v\6\h\r\w\q\8\2\j\w\7\7\7\n\c\y\9\k\v\h\w\f\z\1\r\l\i\z\b\c\7\j\u\t\1\4\l\y\3\u\n\j\1\r\k\v\b\b\5\a\x\m\8\2\q\d\k\h\3\h\w\4\1\p\4\y\1\l\6\6\c\g\i\l\f\8\x\u\j\s\r\t\y\5\u\g\f\g\e\f\9\4\e\l\8\9\7\g\f\2\m\f\u\v\v\g\y\h\s\c\q\t\j\l\p\c\c\6\k\3\k\a\v\1\u\v\b\c\h\y\0\n\j\v\7\h\o\s\3\8\3\u\8\m\6\d\0\a\t\e\3\q\b\j\w\9\n\1\h\b\i\9\o\p\p\z\q\m\d\n\3\4\1\g\o\2\b\b\7\l\n\m\t\d\6\3\c\s\7\7\j\a\9\k\g\6\d\p\2\5\2\2\j\b\2\n\3\a\s\9\s\8\h\w\z\y\l\d\h\r\i\r\t\u\2\x\i\g\3\9\h\b\j\j\b\y\u\y\v\f\e\r\m\w\d\n\g\z\h\q\3\g\9\a\n\y\l\g\z\c\s\z\u\r\8\r\b\s\r\w\m\c\p\8\w\j\t\t\y\p\x\l\i\a\c\u\l\f\3\p\m\m\t\e\e\4\n\t\m\s\6\s\8\g\5\t\w\k\d\h\n\1\m\8\i\3\d\a\f\f\n\a\i\x\x\7\t\p\h\2\3\7\g\w\s\0\7\6\i\q\s\0\6\y\j\3\r\1\0\g\6\i\0\8\p\f\t\9\5\o\5\x\3\b\5\8\s\o\r\t\n\j\k\y\0\z\7\9\r\t\2\f\2\3\
0\q\5\b\r\e\b\9\c\w\8\r\j\2\6\s\8\c\b\7\3\h\x\3\5\r\c\a\8\l\3\g\l\c\t\m\i\n\x\f\9\m\6\p\y\o\b\p\9\a\o\0\9\0\n\i\0\g\6\c\4\8\4\g\u\9\i\0\7\t\s\k\p\4\a\c\t\x\6\q\a\p\3\i\w\6\c\w\f\1\3\o\8\7\j\1\p\a\q\x\h\s\l\d\2\f\6\1\t\w\r\m\h\t\n\z\t\8\8\p\m\m\s\b\c\p\i\c\w\m\3\x\g\f\i\2\4\c\t\7\a\m\j\z\q\k\i\l\a\s\y\5\d\w\n\n\k\u\m\k\m\9\p\5\r\k\0\u\w\p\3\9\u\2\f\i\q\1\p\f\t\y\h\q\q\6\d\6\p\7\1\i\t\q\i\d\e\q\w\0\n\5\p\j\d\e\7\5\x\u\c\s\c\a\r\8\o\r\u\p\f\n\k\h\g\s\p\q\y\q\9\b\5\9\c\8\t\k\e\s\2\s\l\l\p\i\o\i\p\4\k\c\o\v\a\z\4\b\y\d\5\f\9\z\l\5\l\q\a\k\k\s\d\4\y\i\r\m\f\b\5\0\6\h\o\g\w\j\q\9\o\l\f\a\c\p\a\2\5\5\1\7\o\a\f\3\0\k\6\x\z\p\3\l\i\3\3\j\v\g\v\5\u\r\p\x\i\8\k\t\w\k\2\t\y\t\t\8\8\o\5\e\p\6\h\o\3\q\0\z\t\r\n\d\a\w\j\k\i\9\i\f\i\n\w\b\0\e\w\m\o\a\1\m\e\f\6\5\0\7\v\e\u\3\v\d\o\q\i\d\c\s\z\3\c\8\k\t\e\9\f\9\q\y\m\i\0\7\p\x\c\r\f\0\q\w\5\6\0\9\z\i\6\k\z\8\2\q\k\j\z\i\4\d\5\t\d\a\q\o\6\z\u\8\w\y\k\v\w\g\r\3\w\d\g\8\l\o\p\l\1\h\c\j\9\r\t\e\7\c\n\f\a\k\k\r\k\f\2\c\x\z\2\j\3\n\x\t\4\w\e\f\c\v\i\0\d\9\9\v\w\7\k\c\x\e\h\j\v\e\y\w\3\q\k\r\u\1\w\e\j\w\f\e\m\p\l\a\j\k\g\6\t\u\a\b\5\q\w\1\t\q\x\5\4\k\a\0\g\j\v\z\m\1\l\h\7\s\v\3\8\p\f\7\p\n\b\2\u\m\f\p\n\f\7\1\m\t\a\c\k\2\d\n\9\0\h\o\2\z\h\9\6\x\4\5\g\e\d\t\a\k\u\3\3\z\v\e\9\h\v\c\b\d\s\c\t\w\r\8\l\p\z\q\f\9\3\0\z\n\s\8\1\z\8\l\x\w\i\7\m\b\z\6\r\g\l\t\s\k\y\y\n\e\t\g\y\u\o\v\f\i\d\x\q\5\l\f\0\0\j\v\b\i\x\u\o\n\u\3\d\n\s\a\4\u\2\m\r\x\6\a\e\s\k\k\8\c\u\f\j\u\6\r\c\0\z\9\z\k\4\9\f\s\5\c\4\t\f\i\g\k\u\g\o\k\7\5\7\4\x\x\c\1\v\g\b\4\c\t\p\s\t\k\2\6\q\o\f\l\c\q\a\l\j\9\d\p\i\e\k\j\i\g\s\k\5\h\n\6\8\4\5\0\t\5\i\p\0\2\1\f\7\c\6\u\z\y\6\i\0\a\i\e\4\3\4\0\8\m\e\t\2\l\4\w\h\f\2\f\s\o\2\w\k\s\0\8\3\7\u\f\d\y\2\y\x\w\x\o\7\y\v\8\4\a\w\3\y\i\5\6\7\8\n\8\k\o\z\4\4\u\7\8\c\e\2\6\x\6\3\5\5\b\3\s\j\w\x\j\h\9\9\a\a\2\g\6\w\2\n\h\e\0\7\q\s\y\p\c\a\f\d\x\d\9\i\f\4\v\t\e\7\4\u\k\q\s\y\q\f\t\f\h\r\e\8\s\s\4\a\4\g\d\0\8\5\g\a\k\k\n\g\1\u\7\y\r\m\a\i\4\9\s\2\u\c\1\w\f\8\k\n\8\p\z\5\m\s\5\l\b\k\9\k\0\9\o\6\e\f\5\o\2\7\7\9\q\w\s\5\n\q\6\f\l\3\r\3\e\f\k\5\u\4\p\0\2\3\q\o\6\1\k\k\z\1\0\h\5\9\g\k\4\d\u\n\w\6\h\j\e\d\i\1\h\l\w\r\o\5\9\q\l\o\t\i\9\v\v\d\h\g\6\w\6\3\n\4\t\4\k\t\k\n\e\5\y\o\5\o\c\t\h\y\7\g\b\e\z\c\r\m\m\r\p\t\v\t\e\0\q\n\b\a\4\4\r\5\g\y\p\7\t\e\b\u\2\2\t\b\w\y\b\3\c\m\4\l\8\9\4\c\x\k\n\y\n\q\i\t\2\9\5\6\m\2\i\s\g\w\a\x\e\1\v\e\t\n\t\v\3\y\j\0\x\y\s\v\a\u\d\q\n\t\j\t\i\7\l\k\3\f\p\l\7\f\i\o\5\t\3\f\c\2\d\c\v\i\x\a\0\q\a\8\v\a\2\d\h\h\x\n\t\7\q\4\b\w\p\a\2\s\s\h\c\g\2\k\y\2\l\j\y\5\v\8\h\l\6\5\j\z\0\e\e\b\a\3\5\y\b\l\1\5\u\q\e\8\2\7\z\f\o\y\6\7\w\7\p\1\i\4\l\u\h\r\u\x\h\d\n\h\j\g\3\p\7\k\u\7\7\j\y\o\i\4\r\8\r\8\8\x\4\p\w\n\h\d\6\4\i\k\r\i\1\g\3\9\x\g\3\1\2\8\j\m\k\0\1\q\2\r\6\x\m\m\e\i\h\n\6\0\6\w\q\0\3\x\5\6\u\9\m\t\c\r\r\o\k\4\3\f\y\n\q\a\a\d\y\4\r\7\p\s\x\a\e\4\k\q\x\7\f\0\5\x\j\v\n\1\e\i\x\8\s\6\y\t\e\t\a\c\o\g\y\h\b\4\g\t\t\y\h\e\4\q\o\c\1\8\g\1\q\o\r\9\k\5\y\0\y\f\3\0\y\0\9\1\z\3\g\m\m\b\8\6\y\5\x\6\1\j\s\h\s\9\x\m\8\u\z\n\c\i\f\a\6\s\x\h\8\i\k\z\b\p\j\9\i\u\v\p\e\o\d\k\a\i\g\i\o\7\7\9\l\k\l\1\x\u\n\5\b\n\7\j\c\0\g\1\w\3\q\l\l\7\i\v\o\g\c\j\d\q\f\a\3\m\0\f\v\g\s\0\s\j\m\p\6\4\f\9\n\8\a\9\p\u\1\8\5\o\f\5\1\a\c\l\k\y\t\v\8\k\6\x\6\i\c\t\b\k\p\t\s\6\s\j\w\a\l\v\1\a\9\7\y\4\u\d\2\r\8\3\u\i\b\r\g\j\i\l\k\k\4\q\d\t\3\v\q\m\k\q\4\t\2\i\4\s\u\p\9\7\f\l\8\w\6\s\o\k\4\k\8\0\s\m\b\f\8\o\w\5\5\h\c\b\m\u\0\d\4\l\c\3\t\s\3\h\m\3\k\d\p\r\b\b\o\3\k\u\2\s\f\7\7\e\i\0\4\h\f\v\w\d\r\j\j\6\t\f\0\k\i\e\o\f\a\9\0\r\h\k\g\0\s\l\q\g\m\3\s\8\a\x\9\0\l\y\g\m\f\5\e\v\d\s\e\b\n\p\f\e\j\t\2\e\a\8\2\7\e\j\s\w\5\e\p\q\f\n\a\6\k\s\a\4\m\k\i\n\0\7\2\y\1\g\b\t\r\2\h\i\w\a\2\h\q\t\s\h\3\y\e\y\n\p\5\1\6\e\j\1\v\n\7
\j\5\t\7\z\d\t\j\q\n\0\6\b\l\6\c\z\m\h\f\h\s\l\c\q\f\y\f\4\a\l\e\o\x\7\j\e\y\2\b\n\o\f\c\6\8\6\w\l\6\d\l\6\4\9\k\j\p\r\a\y\5\x\m\b\g\l\d\r\5\n\4\z\k\r\7\z\r\a\s\w\s\x\1\s\y\i\c\x\h\g\b\3\3\y\k\4\e\h\r\n\h\2\u\4\e\z\5\f\g\b\h\5\5\w\u\w\9\n\d\z\2\0\7\t\g\u\w\p\5\d\o\0\8\s\7\f\r\c\g\f\d\4\1\b\u\w\v\l\q\3\2\f\2\x\n\4\d\k\4\5\u\u\f\l\8\6\i\n\r\h\3\9\z\w\m\p\k\d\8\c\s\c\3\b\9\d\m\6\d\c\w\3\j\4\8\t\0\u\5\9\9\4\4\x\6\h\s\b\l\k\t\p\k\3\z\u\d\l\u\9\m\q\3\g\3\w\a\t\0\m\g\h\5\t\p\z\x\k\t\a\9\f\q\m\k\a\d\d\v\5\g\h\2\0\3\d\a\w\f\j\v\l\6\f\z\j\6\l\q\8\q\y\x\l\a\l\9\b\b\y\i\r\l\5\l\t\i\m\g\3\u\k\x\x\7\f\3\k\c\i\9\n\o\c\f\p\7\5\c\1\1\p\8\w\o\e\k\t\e\h\6\h\m\w\z\o\4\2\s\s\6\u\9\7\w\m\i\3\r\5\d\a\4\6\2\2\a\q\j\l\6\2\h\j\t\2\l\a\h\0\x\c\r\x\g\9\6\w\0\5\b\j\t\o\v\f\9\s\z\v\a\g\c\9\a\l\z\o\z\1\o\7\2\o\z\u\p\x\2\e\x\q\y\w\c\m\7\v\y\e\y\v\z\x\l\7\s\s\d\6\g\x\c\n\4\3\0\0\j\0\k\b\i\c\j\j\3\c\t\x\u\d\z\e\a\e\3\j\j\t\v\1\v\2\8\3\l\e\g\s\7\8\e\q\h\m\7\e\d\6\g\r\h\r\h\t\n\3\l\6\s\u\9\d\5\0\1\r\z\r\a\t\n\o\g\b\5\c\g\o\w\r\j\t\c\i\g\z\x\w\w\t\p\s\7\c\n\7\l\j\s\f\6\h\t\g\j\m\y\w\s\7\6\z\n\k\3\8\0\m ]] 00:10:44.539 00:10:44.539 real 0m1.316s 00:10:44.539 user 0m0.906s 00:10:44.539 sys 0m0.609s 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.539 ************************************ 00:10:44.539 END TEST dd_rw_offset 00:10:44.539 ************************************ 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:44.539 06:05:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:44.810 { 00:10:44.810 "subsystems": [ 00:10:44.810 { 00:10:44.810 "subsystem": "bdev", 00:10:44.810 "config": [ 00:10:44.810 { 00:10:44.810 "params": { 00:10:44.810 "trtype": "pcie", 00:10:44.810 "traddr": "0000:00:10.0", 00:10:44.810 "name": "Nvme0" 00:10:44.810 }, 00:10:44.810 "method": "bdev_nvme_attach_controller" 00:10:44.810 }, 00:10:44.810 { 00:10:44.810 "method": "bdev_wait_for_examine" 00:10:44.810 } 00:10:44.810 ] 00:10:44.810 } 00:10:44.810 ] 00:10:44.810 } 00:10:44.810 [2024-11-27 06:05:49.653061] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:44.810 [2024-11-27 06:05:49.653182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:10:44.810 [2024-11-27 06:05:49.803473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.810 [2024-11-27 06:05:49.875067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.069 [2024-11-27 06:05:49.934785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.069  [2024-11-27T06:05:50.425Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:45.328 00:10:45.328 06:05:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.328 00:10:45.328 real 0m18.813s 00:10:45.328 user 0m13.479s 00:10:45.328 sys 0m7.135s 00:10:45.328 06:05:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.328 06:05:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 ************************************ 00:10:45.328 END TEST spdk_dd_basic_rw 00:10:45.328 ************************************ 00:10:45.328 06:05:50 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:45.328 06:05:50 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.328 06:05:50 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.328 06:05:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 ************************************ 00:10:45.328 START TEST spdk_dd_posix 00:10:45.328 ************************************ 00:10:45.328 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:45.328 * Looking for test storage... 
00:10:45.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:45.328 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.328 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.328 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.587 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.588 --rc genhtml_branch_coverage=1 00:10:45.588 --rc genhtml_function_coverage=1 00:10:45.588 --rc genhtml_legend=1 00:10:45.588 --rc geninfo_all_blocks=1 00:10:45.588 --rc geninfo_unexecuted_blocks=1 00:10:45.588 00:10:45.588 ' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.588 --rc genhtml_branch_coverage=1 00:10:45.588 --rc genhtml_function_coverage=1 00:10:45.588 --rc genhtml_legend=1 00:10:45.588 --rc geninfo_all_blocks=1 00:10:45.588 --rc geninfo_unexecuted_blocks=1 00:10:45.588 00:10:45.588 ' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.588 --rc genhtml_branch_coverage=1 00:10:45.588 --rc genhtml_function_coverage=1 00:10:45.588 --rc genhtml_legend=1 00:10:45.588 --rc geninfo_all_blocks=1 00:10:45.588 --rc geninfo_unexecuted_blocks=1 00:10:45.588 00:10:45.588 ' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.588 --rc genhtml_branch_coverage=1 00:10:45.588 --rc genhtml_function_coverage=1 00:10:45.588 --rc genhtml_legend=1 00:10:45.588 --rc geninfo_all_blocks=1 00:10:45.588 --rc geninfo_unexecuted_blocks=1 00:10:45.588 00:10:45.588 ' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:10:45.588 * First test run, liburing in use 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:45.588 ************************************ 00:10:45.588 START TEST dd_flag_append 00:10:45.588 ************************************ 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=gniyc5om9cog0otod6xsu1taq6m0enc7 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=0s5oz04a3lre0y3zo5asp6rofkm3nqx4 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s gniyc5om9cog0otod6xsu1taq6m0enc7 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 0s5oz04a3lre0y3zo5asp6rofkm3nqx4 00:10:45.588 06:05:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:45.588 [2024-11-27 06:05:50.591917] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:45.588 [2024-11-27 06:05:50.592066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60396 ] 00:10:45.848 [2024-11-27 06:05:50.747512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.848 [2024-11-27 06:05:50.810885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.848 [2024-11-27 06:05:50.866885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.848  [2024-11-27T06:05:51.204Z] Copying: 32/32 [B] (average 31 kBps) 00:10:46.107 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 0s5oz04a3lre0y3zo5asp6rofkm3nqx4gniyc5om9cog0otod6xsu1taq6m0enc7 == \0\s\5\o\z\0\4\a\3\l\r\e\0\y\3\z\o\5\a\s\p\6\r\o\f\k\m\3\n\q\x\4\g\n\i\y\c\5\o\m\9\c\o\g\0\o\t\o\d\6\x\s\u\1\t\a\q\6\m\0\e\n\c\7 ]] 00:10:46.107 00:10:46.107 real 0m0.594s 00:10:46.107 user 0m0.326s 00:10:46.107 sys 0m0.302s 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:46.107 ************************************ 00:10:46.107 END TEST dd_flag_append 00:10:46.107 ************************************ 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:46.107 ************************************ 00:10:46.107 START TEST dd_flag_directory 00:10:46.107 ************************************ 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:46.107 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:46.366 [2024-11-27 06:05:51.207779] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:46.366 [2024-11-27 06:05:51.207917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60425 ] 00:10:46.366 [2024-11-27 06:05:51.358814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.366 [2024-11-27 06:05:51.428903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.625 [2024-11-27 06:05:51.486983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.625 [2024-11-27 06:05:51.529442] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:46.625 [2024-11-27 06:05:51.529533] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:46.625 [2024-11-27 06:05:51.529561] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:46.625 [2024-11-27 06:05:51.656779] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.884 06:05:51 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:46.884 06:05:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:46.884 [2024-11-27 06:05:51.792286] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:46.884 [2024-11-27 06:05:51.792385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60434 ] 00:10:46.884 [2024-11-27 06:05:51.942279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.141 [2024-11-27 06:05:52.014856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.141 [2024-11-27 06:05:52.075579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.141 [2024-11-27 06:05:52.119357] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:47.141 [2024-11-27 06:05:52.119426] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:47.141 [2024-11-27 06:05:52.119451] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:47.400 [2024-11-27 06:05:52.247377] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:47.400 ************************************ 00:10:47.400 END TEST dd_flag_directory 00:10:47.400 ************************************ 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:47.400 00:10:47.400 real 0m1.177s 00:10:47.400 user 0m0.672s 00:10:47.400 sys 0m0.294s 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:10:47.400 06:05:52 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:47.400 ************************************ 00:10:47.400 START TEST dd_flag_nofollow 00:10:47.400 ************************************ 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:47.400 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:47.400 [2024-11-27 06:05:52.434053] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:47.400 [2024-11-27 06:05:52.434160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60463 ] 00:10:47.659 [2024-11-27 06:05:52.583557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.659 [2024-11-27 06:05:52.657519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.659 [2024-11-27 06:05:52.717474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.918 [2024-11-27 06:05:52.760325] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:47.918 [2024-11-27 06:05:52.761022] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:47.918 [2024-11-27 06:05:52.761055] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:47.918 [2024-11-27 06:05:52.887190] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.918 06:05:52 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:47.918 06:05:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:48.176 [2024-11-27 06:05:53.026205] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:48.177 [2024-11-27 06:05:53.026320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:10:48.177 [2024-11-27 06:05:53.175300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.177 [2024-11-27 06:05:53.245689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.435 [2024-11-27 06:05:53.302983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.435 [2024-11-27 06:05:53.343598] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:48.435 [2024-11-27 06:05:53.343669] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:48.435 [2024-11-27 06:05:53.343694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:48.435 [2024-11-27 06:05:53.466144] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:48.694 06:05:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:48.694 [2024-11-27 06:05:53.597792] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:48.694 [2024-11-27 06:05:53.598108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60484 ] 00:10:48.694 [2024-11-27 06:05:53.745990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.952 [2024-11-27 06:05:53.808296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.952 [2024-11-27 06:05:53.863422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.952  [2024-11-27T06:05:54.308Z] Copying: 512/512 [B] (average 500 kBps) 00:10:49.211 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ i0sy3om497bqewoyivis8a3elynd7fzgd4njyk2st96u9tbjb7ymamevzivuj0m7mtnknnv36nywtrd65gfr8owozfkkijvxixs09qf1i8ngro9r29qdvowwp2ibob32fv8bg49xln3lb3w3b4b1ku2n2h0b8dfjmtogpp0gfdx0c1rganq7st4ux7x2lq4souudod5i2pfvppy4xccu3ynoq4s43ee6oz6jtcm01oxtnq3vkh0uug0yky6q9zjf7u2wmssh7uxzzuai2bm5znh20fa18j70ybrez22d679yf65b09dyqbi89tdz9vata4fuxp2zedbuw3rt1bpvsgrgo2cbnxg0x7qtfea1hy4zgjkntynt9nn7e1yvofhfk0bcjbv2f0osymchrytdxmvadtic896lwbhnr339vvy118s81wnpf5z7mk0yswh2zolj6ptnrup33nkmj8dan2bqks8gea4oh214ab8cx4wb615l0jla9d9ncx2vftzd == \i\0\s\y\3\o\m\4\9\7\b\q\e\w\o\y\i\v\i\s\8\a\3\e\l\y\n\d\7\f\z\g\d\4\n\j\y\k\2\s\t\9\6\u\9\t\b\j\b\7\y\m\a\m\e\v\z\i\v\u\j\0\m\7\m\t\n\k\n\n\v\3\6\n\y\w\t\r\d\6\5\g\f\r\8\o\w\o\z\f\k\k\i\j\v\x\i\x\s\0\9\q\f\1\i\8\n\g\r\o\9\r\2\9\q\d\v\o\w\w\p\2\i\b\o\b\3\2\f\v\8\b\g\4\9\x\l\n\3\l\b\3\w\3\b\4\b\1\k\u\2\n\2\h\0\b\8\d\f\j\m\t\o\g\p\p\0\g\f\d\x\0\c\1\r\g\a\n\q\7\s\t\4\u\x\7\x\2\l\q\4\s\o\u\u\d\o\d\5\i\2\p\f\v\p\p\y\4\x\c\c\u\3\y\n\o\q\4\s\4\3\e\e\6\o\z\6\j\t\c\m\0\1\o\x\t\n\q\3\v\k\h\0\u\u\g\0\y\k\y\6\q\9\z\j\f\7\u\2\w\m\s\s\h\7\u\x\z\z\u\a\i\2\b\m\5\z\n\h\2\0\f\a\1\8\j\7\0\y\b\r\e\z\2\2\d\6\7\9\y\f\6\5\b\0\9\d\y\q\b\i\8\9\t\d\z\9\v\a\t\a\4\f\u\x\p\2\z\e\d\b\u\w\3\r\t\1\b\p\v\s\g\r\g\o\2\c\b\n\x\g\0\x\7\q\t\f\e\a\1\h\y\4\z\g\j\k\n\t\y\n\t\9\n\n\7\e\1\y\v\o\f\h\f\k\0\b\c\j\b\v\2\f\0\o\s\y\m\c\h\r\y\t\d\x\m\v\a\d\t\i\c\8\9\6\l\w\b\h\n\r\3\3\9\v\v\y\1\1\8\s\8\1\w\n\p\f\5\z\7\m\k\0\y\s\w\h\2\z\o\l\j\6\p\t\n\r\u\p\3\3\n\k\m\j\8\d\a\n\2\b\q\k\s\8\g\e\a\4\o\h\2\1\4\a\b\8\c\x\4\w\b\6\1\5\l\0\j\l\a\9\d\9\n\c\x\2\v\f\t\z\d ]] 00:10:49.211 00:10:49.211 real 0m1.708s 00:10:49.211 user 0m0.963s 00:10:49.211 sys 0m0.550s 00:10:49.211 ************************************ 00:10:49.211 END TEST dd_flag_nofollow 00:10:49.211 ************************************ 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:49.211 ************************************ 00:10:49.211 START TEST dd_flag_noatime 00:10:49.211 ************************************ 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732687553 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732687554 00:10:49.211 06:05:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:10:50.146 06:05:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:50.146 [2024-11-27 06:05:55.217011] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:50.146 [2024-11-27 06:05:55.217122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60528 ] 00:10:50.403 [2024-11-27 06:05:55.368180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.403 [2024-11-27 06:05:55.443527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.662 [2024-11-27 06:05:55.503113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.662  [2024-11-27T06:05:55.759Z] Copying: 512/512 [B] (average 500 kBps) 00:10:50.662 00:10:50.662 06:05:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:50.662 06:05:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732687553 )) 00:10:50.662 06:05:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:50.662 06:05:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732687554 )) 00:10:50.662 06:05:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:50.921 [2024-11-27 06:05:55.810212] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:50.921 [2024-11-27 06:05:55.810328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60541 ] 00:10:50.921 [2024-11-27 06:05:55.960752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.204 [2024-11-27 06:05:56.033608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.204 [2024-11-27 06:05:56.091872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.204  [2024-11-27T06:05:56.560Z] Copying: 512/512 [B] (average 500 kBps) 00:10:51.463 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:51.463 ************************************ 00:10:51.463 END TEST dd_flag_noatime 00:10:51.463 ************************************ 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732687556 )) 00:10:51.463 00:10:51.463 real 0m2.206s 00:10:51.463 user 0m0.666s 00:10:51.463 sys 0m0.599s 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:51.463 ************************************ 00:10:51.463 START TEST dd_flags_misc 00:10:51.463 ************************************ 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:51.463 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:51.463 [2024-11-27 06:05:56.449756] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:51.463 [2024-11-27 06:05:56.450007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:10:51.722 [2024-11-27 06:05:56.597112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.722 [2024-11-27 06:05:56.667830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.722 [2024-11-27 06:05:56.726340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.722  [2024-11-27T06:05:57.077Z] Copying: 512/512 [B] (average 500 kBps) 00:10:51.980 00:10:51.980 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xesax02veesyh51un9u81fytexm649j2nkofce3hzfkijckrdyxtzux3g9exqsuf2tb70ac3dff11rjhoeozcfvg2y17vickh8rrgp3nyooip5mjcjsr7ssvn819thzsnmmct5mz654epplr4o4v45qvbn6tlbb1tr6s015dxq0iw7kahzqpdsibrxxc260q16xgng3bwgojav0uksjmhu5vrg57tps9reyjes2cja2q6yu1a9123gy9rlr3ex4b2l1u6k0x7q8wrjikucp4dn8bbe11cadj4n8xvu3zpkldejwwhkw7iysdobqr7lyu8vn2xzot06ftqq37lvddo3t7gtk5wzjp9mjl9senu9kj4nlkaoxu48n7gr04hmcjgcpx9ryon885krc5c6rh9pggor38nv719cnvy1k3wfueybnpfj8q0yp83liww05kxm7d0ar1wfqnyvh0hazparxx5n0npd6ib36bp9z6gzmqcdj72swnfg7cyk1nsqzf == \x\e\s\a\x\0\2\v\e\e\s\y\h\5\1\u\n\9\u\8\1\f\y\t\e\x\m\6\4\9\j\2\n\k\o\f\c\e\3\h\z\f\k\i\j\c\k\r\d\y\x\t\z\u\x\3\g\9\e\x\q\s\u\f\2\t\b\7\0\a\c\3\d\f\f\1\1\r\j\h\o\e\o\z\c\f\v\g\2\y\1\7\v\i\c\k\h\8\r\r\g\p\3\n\y\o\o\i\p\5\m\j\c\j\s\r\7\s\s\v\n\8\1\9\t\h\z\s\n\m\m\c\t\5\m\z\6\5\4\e\p\p\l\r\4\o\4\v\4\5\q\v\b\n\6\t\l\b\b\1\t\r\6\s\0\1\5\d\x\q\0\i\w\7\k\a\h\z\q\p\d\s\i\b\r\x\x\c\2\6\0\q\1\6\x\g\n\g\3\b\w\g\o\j\a\v\0\u\k\s\j\m\h\u\5\v\r\g\5\7\t\p\s\9\r\e\y\j\e\s\2\c\j\a\2\q\6\y\u\1\a\9\1\2\3\g\y\9\r\l\r\3\e\x\4\b\2\l\1\u\6\k\0\x\7\q\8\w\r\j\i\k\u\c\p\4\d\n\8\b\b\e\1\1\c\a\d\j\4\n\8\x\v\u\3\z\p\k\l\d\e\j\w\w\h\k\w\7\i\y\s\d\o\b\q\r\7\l\y\u\8\v\n\2\x\z\o\t\0\6\f\t\q\q\3\7\l\v\d\d\o\3\t\7\g\t\k\5\w\z\j\p\9\m\j\l\9\s\e\n\u\9\k\j\4\n\l\k\a\o\x\u\4\8\n\7\g\r\0\4\h\m\c\j\g\c\p\x\9\r\y\o\n\8\8\5\k\r\c\5\c\6\r\h\9\p\g\g\o\r\3\8\n\v\7\1\9\c\n\v\y\1\k\3\w\f\u\e\y\b\n\p\f\j\8\q\0\y\p\8\3\l\i\w\w\0\5\k\x\m\7\d\0\a\r\1\w\f\q\n\y\v\h\0\h\a\z\p\a\r\x\x\5\n\0\n\p\d\6\i\b\3\6\b\p\9\z\6\g\z\m\q\c\d\j\7\2\s\w\n\f\g\7\c\y\k\1\n\s\q\z\f ]] 00:10:51.980 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:51.980 06:05:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:51.980 [2024-11-27 06:05:57.020708] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:51.980 [2024-11-27 06:05:57.021105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60585 ] 00:10:52.238 [2024-11-27 06:05:57.174664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.238 [2024-11-27 06:05:57.247171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.238 [2024-11-27 06:05:57.305172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.498  [2024-11-27T06:05:57.595Z] Copying: 512/512 [B] (average 500 kBps) 00:10:52.498 00:10:52.498 06:05:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xesax02veesyh51un9u81fytexm649j2nkofce3hzfkijckrdyxtzux3g9exqsuf2tb70ac3dff11rjhoeozcfvg2y17vickh8rrgp3nyooip5mjcjsr7ssvn819thzsnmmct5mz654epplr4o4v45qvbn6tlbb1tr6s015dxq0iw7kahzqpdsibrxxc260q16xgng3bwgojav0uksjmhu5vrg57tps9reyjes2cja2q6yu1a9123gy9rlr3ex4b2l1u6k0x7q8wrjikucp4dn8bbe11cadj4n8xvu3zpkldejwwhkw7iysdobqr7lyu8vn2xzot06ftqq37lvddo3t7gtk5wzjp9mjl9senu9kj4nlkaoxu48n7gr04hmcjgcpx9ryon885krc5c6rh9pggor38nv719cnvy1k3wfueybnpfj8q0yp83liww05kxm7d0ar1wfqnyvh0hazparxx5n0npd6ib36bp9z6gzmqcdj72swnfg7cyk1nsqzf == \x\e\s\a\x\0\2\v\e\e\s\y\h\5\1\u\n\9\u\8\1\f\y\t\e\x\m\6\4\9\j\2\n\k\o\f\c\e\3\h\z\f\k\i\j\c\k\r\d\y\x\t\z\u\x\3\g\9\e\x\q\s\u\f\2\t\b\7\0\a\c\3\d\f\f\1\1\r\j\h\o\e\o\z\c\f\v\g\2\y\1\7\v\i\c\k\h\8\r\r\g\p\3\n\y\o\o\i\p\5\m\j\c\j\s\r\7\s\s\v\n\8\1\9\t\h\z\s\n\m\m\c\t\5\m\z\6\5\4\e\p\p\l\r\4\o\4\v\4\5\q\v\b\n\6\t\l\b\b\1\t\r\6\s\0\1\5\d\x\q\0\i\w\7\k\a\h\z\q\p\d\s\i\b\r\x\x\c\2\6\0\q\1\6\x\g\n\g\3\b\w\g\o\j\a\v\0\u\k\s\j\m\h\u\5\v\r\g\5\7\t\p\s\9\r\e\y\j\e\s\2\c\j\a\2\q\6\y\u\1\a\9\1\2\3\g\y\9\r\l\r\3\e\x\4\b\2\l\1\u\6\k\0\x\7\q\8\w\r\j\i\k\u\c\p\4\d\n\8\b\b\e\1\1\c\a\d\j\4\n\8\x\v\u\3\z\p\k\l\d\e\j\w\w\h\k\w\7\i\y\s\d\o\b\q\r\7\l\y\u\8\v\n\2\x\z\o\t\0\6\f\t\q\q\3\7\l\v\d\d\o\3\t\7\g\t\k\5\w\z\j\p\9\m\j\l\9\s\e\n\u\9\k\j\4\n\l\k\a\o\x\u\4\8\n\7\g\r\0\4\h\m\c\j\g\c\p\x\9\r\y\o\n\8\8\5\k\r\c\5\c\6\r\h\9\p\g\g\o\r\3\8\n\v\7\1\9\c\n\v\y\1\k\3\w\f\u\e\y\b\n\p\f\j\8\q\0\y\p\8\3\l\i\w\w\0\5\k\x\m\7\d\0\a\r\1\w\f\q\n\y\v\h\0\h\a\z\p\a\r\x\x\5\n\0\n\p\d\6\i\b\3\6\b\p\9\z\6\g\z\m\q\c\d\j\7\2\s\w\n\f\g\7\c\y\k\1\n\s\q\z\f ]] 00:10:52.498 06:05:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:52.498 06:05:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:52.756 [2024-11-27 06:05:57.598222] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:52.756 [2024-11-27 06:05:57.598326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:10:52.756 [2024-11-27 06:05:57.748839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.756 [2024-11-27 06:05:57.821900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.014 [2024-11-27 06:05:57.880651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.014  [2024-11-27T06:05:58.371Z] Copying: 512/512 [B] (average 166 kBps) 00:10:53.274 00:10:53.274 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xesax02veesyh51un9u81fytexm649j2nkofce3hzfkijckrdyxtzux3g9exqsuf2tb70ac3dff11rjhoeozcfvg2y17vickh8rrgp3nyooip5mjcjsr7ssvn819thzsnmmct5mz654epplr4o4v45qvbn6tlbb1tr6s015dxq0iw7kahzqpdsibrxxc260q16xgng3bwgojav0uksjmhu5vrg57tps9reyjes2cja2q6yu1a9123gy9rlr3ex4b2l1u6k0x7q8wrjikucp4dn8bbe11cadj4n8xvu3zpkldejwwhkw7iysdobqr7lyu8vn2xzot06ftqq37lvddo3t7gtk5wzjp9mjl9senu9kj4nlkaoxu48n7gr04hmcjgcpx9ryon885krc5c6rh9pggor38nv719cnvy1k3wfueybnpfj8q0yp83liww05kxm7d0ar1wfqnyvh0hazparxx5n0npd6ib36bp9z6gzmqcdj72swnfg7cyk1nsqzf == \x\e\s\a\x\0\2\v\e\e\s\y\h\5\1\u\n\9\u\8\1\f\y\t\e\x\m\6\4\9\j\2\n\k\o\f\c\e\3\h\z\f\k\i\j\c\k\r\d\y\x\t\z\u\x\3\g\9\e\x\q\s\u\f\2\t\b\7\0\a\c\3\d\f\f\1\1\r\j\h\o\e\o\z\c\f\v\g\2\y\1\7\v\i\c\k\h\8\r\r\g\p\3\n\y\o\o\i\p\5\m\j\c\j\s\r\7\s\s\v\n\8\1\9\t\h\z\s\n\m\m\c\t\5\m\z\6\5\4\e\p\p\l\r\4\o\4\v\4\5\q\v\b\n\6\t\l\b\b\1\t\r\6\s\0\1\5\d\x\q\0\i\w\7\k\a\h\z\q\p\d\s\i\b\r\x\x\c\2\6\0\q\1\6\x\g\n\g\3\b\w\g\o\j\a\v\0\u\k\s\j\m\h\u\5\v\r\g\5\7\t\p\s\9\r\e\y\j\e\s\2\c\j\a\2\q\6\y\u\1\a\9\1\2\3\g\y\9\r\l\r\3\e\x\4\b\2\l\1\u\6\k\0\x\7\q\8\w\r\j\i\k\u\c\p\4\d\n\8\b\b\e\1\1\c\a\d\j\4\n\8\x\v\u\3\z\p\k\l\d\e\j\w\w\h\k\w\7\i\y\s\d\o\b\q\r\7\l\y\u\8\v\n\2\x\z\o\t\0\6\f\t\q\q\3\7\l\v\d\d\o\3\t\7\g\t\k\5\w\z\j\p\9\m\j\l\9\s\e\n\u\9\k\j\4\n\l\k\a\o\x\u\4\8\n\7\g\r\0\4\h\m\c\j\g\c\p\x\9\r\y\o\n\8\8\5\k\r\c\5\c\6\r\h\9\p\g\g\o\r\3\8\n\v\7\1\9\c\n\v\y\1\k\3\w\f\u\e\y\b\n\p\f\j\8\q\0\y\p\8\3\l\i\w\w\0\5\k\x\m\7\d\0\a\r\1\w\f\q\n\y\v\h\0\h\a\z\p\a\r\x\x\5\n\0\n\p\d\6\i\b\3\6\b\p\9\z\6\g\z\m\q\c\d\j\7\2\s\w\n\f\g\7\c\y\k\1\n\s\q\z\f ]] 00:10:53.274 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:53.274 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:53.274 [2024-11-27 06:05:58.187649] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:53.274 [2024-11-27 06:05:58.187755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:10:53.274 [2024-11-27 06:05:58.336235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.533 [2024-11-27 06:05:58.404551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.533 [2024-11-27 06:05:58.462201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.533  [2024-11-27T06:05:58.889Z] Copying: 512/512 [B] (average 25 kBps) 00:10:53.792 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xesax02veesyh51un9u81fytexm649j2nkofce3hzfkijckrdyxtzux3g9exqsuf2tb70ac3dff11rjhoeozcfvg2y17vickh8rrgp3nyooip5mjcjsr7ssvn819thzsnmmct5mz654epplr4o4v45qvbn6tlbb1tr6s015dxq0iw7kahzqpdsibrxxc260q16xgng3bwgojav0uksjmhu5vrg57tps9reyjes2cja2q6yu1a9123gy9rlr3ex4b2l1u6k0x7q8wrjikucp4dn8bbe11cadj4n8xvu3zpkldejwwhkw7iysdobqr7lyu8vn2xzot06ftqq37lvddo3t7gtk5wzjp9mjl9senu9kj4nlkaoxu48n7gr04hmcjgcpx9ryon885krc5c6rh9pggor38nv719cnvy1k3wfueybnpfj8q0yp83liww05kxm7d0ar1wfqnyvh0hazparxx5n0npd6ib36bp9z6gzmqcdj72swnfg7cyk1nsqzf == \x\e\s\a\x\0\2\v\e\e\s\y\h\5\1\u\n\9\u\8\1\f\y\t\e\x\m\6\4\9\j\2\n\k\o\f\c\e\3\h\z\f\k\i\j\c\k\r\d\y\x\t\z\u\x\3\g\9\e\x\q\s\u\f\2\t\b\7\0\a\c\3\d\f\f\1\1\r\j\h\o\e\o\z\c\f\v\g\2\y\1\7\v\i\c\k\h\8\r\r\g\p\3\n\y\o\o\i\p\5\m\j\c\j\s\r\7\s\s\v\n\8\1\9\t\h\z\s\n\m\m\c\t\5\m\z\6\5\4\e\p\p\l\r\4\o\4\v\4\5\q\v\b\n\6\t\l\b\b\1\t\r\6\s\0\1\5\d\x\q\0\i\w\7\k\a\h\z\q\p\d\s\i\b\r\x\x\c\2\6\0\q\1\6\x\g\n\g\3\b\w\g\o\j\a\v\0\u\k\s\j\m\h\u\5\v\r\g\5\7\t\p\s\9\r\e\y\j\e\s\2\c\j\a\2\q\6\y\u\1\a\9\1\2\3\g\y\9\r\l\r\3\e\x\4\b\2\l\1\u\6\k\0\x\7\q\8\w\r\j\i\k\u\c\p\4\d\n\8\b\b\e\1\1\c\a\d\j\4\n\8\x\v\u\3\z\p\k\l\d\e\j\w\w\h\k\w\7\i\y\s\d\o\b\q\r\7\l\y\u\8\v\n\2\x\z\o\t\0\6\f\t\q\q\3\7\l\v\d\d\o\3\t\7\g\t\k\5\w\z\j\p\9\m\j\l\9\s\e\n\u\9\k\j\4\n\l\k\a\o\x\u\4\8\n\7\g\r\0\4\h\m\c\j\g\c\p\x\9\r\y\o\n\8\8\5\k\r\c\5\c\6\r\h\9\p\g\g\o\r\3\8\n\v\7\1\9\c\n\v\y\1\k\3\w\f\u\e\y\b\n\p\f\j\8\q\0\y\p\8\3\l\i\w\w\0\5\k\x\m\7\d\0\a\r\1\w\f\q\n\y\v\h\0\h\a\z\p\a\r\x\x\5\n\0\n\p\d\6\i\b\3\6\b\p\9\z\6\g\z\m\q\c\d\j\7\2\s\w\n\f\g\7\c\y\k\1\n\s\q\z\f ]] 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:53.792 06:05:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:53.792 [2024-11-27 06:05:58.777123] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:53.792 [2024-11-27 06:05:58.777438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:10:54.051 [2024-11-27 06:05:58.923320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.051 [2024-11-27 06:05:58.989848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.051 [2024-11-27 06:05:59.048156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.051  [2024-11-27T06:05:59.406Z] Copying: 512/512 [B] (average 500 kBps) 00:10:54.309 00:10:54.309 06:05:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m0ztqisxbphvu1q02foxn4n1z1ajjx7l6iycig6r84ovfbrzp71eaw01m68kqpttys004ojrjosea6ic9jakrzs23c32osjmcynczq1e9abbp475e63waa6y6szt953uzmrpbgwkzfcywp1o3rl07e220znmtnb19iap2ao90uynw71jy4zc1qfyochqa8bkd5jd12gcka5fjlkp5d6rwl4ys4qf9r3y7sbopygtly0bka0vs3piqoyx48pb8qpbhxxkbdz3klfnwuol3cepov6ctgtoaom991x7azr7awdot3dd0rx7xuqx06ip6t8erh6hholh1koohw6toue3dg5usl3uh49jgu2yjcpr51727qwqjesfdv0lum3kensn2ddwi36ylzogm8snf9cdur9d0cid509v2kuhh858rb7pdxqlgojttclr525jtm56yuttc5sgl5t64qzc51kliyitz6tnfkjve6vryazv0828d8ngj3fh54qos558ww81 == \m\0\z\t\q\i\s\x\b\p\h\v\u\1\q\0\2\f\o\x\n\4\n\1\z\1\a\j\j\x\7\l\6\i\y\c\i\g\6\r\8\4\o\v\f\b\r\z\p\7\1\e\a\w\0\1\m\6\8\k\q\p\t\t\y\s\0\0\4\o\j\r\j\o\s\e\a\6\i\c\9\j\a\k\r\z\s\2\3\c\3\2\o\s\j\m\c\y\n\c\z\q\1\e\9\a\b\b\p\4\7\5\e\6\3\w\a\a\6\y\6\s\z\t\9\5\3\u\z\m\r\p\b\g\w\k\z\f\c\y\w\p\1\o\3\r\l\0\7\e\2\2\0\z\n\m\t\n\b\1\9\i\a\p\2\a\o\9\0\u\y\n\w\7\1\j\y\4\z\c\1\q\f\y\o\c\h\q\a\8\b\k\d\5\j\d\1\2\g\c\k\a\5\f\j\l\k\p\5\d\6\r\w\l\4\y\s\4\q\f\9\r\3\y\7\s\b\o\p\y\g\t\l\y\0\b\k\a\0\v\s\3\p\i\q\o\y\x\4\8\p\b\8\q\p\b\h\x\x\k\b\d\z\3\k\l\f\n\w\u\o\l\3\c\e\p\o\v\6\c\t\g\t\o\a\o\m\9\9\1\x\7\a\z\r\7\a\w\d\o\t\3\d\d\0\r\x\7\x\u\q\x\0\6\i\p\6\t\8\e\r\h\6\h\h\o\l\h\1\k\o\o\h\w\6\t\o\u\e\3\d\g\5\u\s\l\3\u\h\4\9\j\g\u\2\y\j\c\p\r\5\1\7\2\7\q\w\q\j\e\s\f\d\v\0\l\u\m\3\k\e\n\s\n\2\d\d\w\i\3\6\y\l\z\o\g\m\8\s\n\f\9\c\d\u\r\9\d\0\c\i\d\5\0\9\v\2\k\u\h\h\8\5\8\r\b\7\p\d\x\q\l\g\o\j\t\t\c\l\r\5\2\5\j\t\m\5\6\y\u\t\t\c\5\s\g\l\5\t\6\4\q\z\c\5\1\k\l\i\y\i\t\z\6\t\n\f\k\j\v\e\6\v\r\y\a\z\v\0\8\2\8\d\8\n\g\j\3\f\h\5\4\q\o\s\5\5\8\w\w\8\1 ]] 00:10:54.309 06:05:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:54.309 06:05:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:54.309 [2024-11-27 06:05:59.325868] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:54.309 [2024-11-27 06:05:59.326201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:10:54.570 [2024-11-27 06:05:59.474825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.570 [2024-11-27 06:05:59.546937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.570 [2024-11-27 06:05:59.606161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.570  [2024-11-27T06:05:59.925Z] Copying: 512/512 [B] (average 500 kBps) 00:10:54.828 00:10:54.828 06:05:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m0ztqisxbphvu1q02foxn4n1z1ajjx7l6iycig6r84ovfbrzp71eaw01m68kqpttys004ojrjosea6ic9jakrzs23c32osjmcynczq1e9abbp475e63waa6y6szt953uzmrpbgwkzfcywp1o3rl07e220znmtnb19iap2ao90uynw71jy4zc1qfyochqa8bkd5jd12gcka5fjlkp5d6rwl4ys4qf9r3y7sbopygtly0bka0vs3piqoyx48pb8qpbhxxkbdz3klfnwuol3cepov6ctgtoaom991x7azr7awdot3dd0rx7xuqx06ip6t8erh6hholh1koohw6toue3dg5usl3uh49jgu2yjcpr51727qwqjesfdv0lum3kensn2ddwi36ylzogm8snf9cdur9d0cid509v2kuhh858rb7pdxqlgojttclr525jtm56yuttc5sgl5t64qzc51kliyitz6tnfkjve6vryazv0828d8ngj3fh54qos558ww81 == \m\0\z\t\q\i\s\x\b\p\h\v\u\1\q\0\2\f\o\x\n\4\n\1\z\1\a\j\j\x\7\l\6\i\y\c\i\g\6\r\8\4\o\v\f\b\r\z\p\7\1\e\a\w\0\1\m\6\8\k\q\p\t\t\y\s\0\0\4\o\j\r\j\o\s\e\a\6\i\c\9\j\a\k\r\z\s\2\3\c\3\2\o\s\j\m\c\y\n\c\z\q\1\e\9\a\b\b\p\4\7\5\e\6\3\w\a\a\6\y\6\s\z\t\9\5\3\u\z\m\r\p\b\g\w\k\z\f\c\y\w\p\1\o\3\r\l\0\7\e\2\2\0\z\n\m\t\n\b\1\9\i\a\p\2\a\o\9\0\u\y\n\w\7\1\j\y\4\z\c\1\q\f\y\o\c\h\q\a\8\b\k\d\5\j\d\1\2\g\c\k\a\5\f\j\l\k\p\5\d\6\r\w\l\4\y\s\4\q\f\9\r\3\y\7\s\b\o\p\y\g\t\l\y\0\b\k\a\0\v\s\3\p\i\q\o\y\x\4\8\p\b\8\q\p\b\h\x\x\k\b\d\z\3\k\l\f\n\w\u\o\l\3\c\e\p\o\v\6\c\t\g\t\o\a\o\m\9\9\1\x\7\a\z\r\7\a\w\d\o\t\3\d\d\0\r\x\7\x\u\q\x\0\6\i\p\6\t\8\e\r\h\6\h\h\o\l\h\1\k\o\o\h\w\6\t\o\u\e\3\d\g\5\u\s\l\3\u\h\4\9\j\g\u\2\y\j\c\p\r\5\1\7\2\7\q\w\q\j\e\s\f\d\v\0\l\u\m\3\k\e\n\s\n\2\d\d\w\i\3\6\y\l\z\o\g\m\8\s\n\f\9\c\d\u\r\9\d\0\c\i\d\5\0\9\v\2\k\u\h\h\8\5\8\r\b\7\p\d\x\q\l\g\o\j\t\t\c\l\r\5\2\5\j\t\m\5\6\y\u\t\t\c\5\s\g\l\5\t\6\4\q\z\c\5\1\k\l\i\y\i\t\z\6\t\n\f\k\j\v\e\6\v\r\y\a\z\v\0\8\2\8\d\8\n\g\j\3\f\h\5\4\q\o\s\5\5\8\w\w\8\1 ]] 00:10:54.828 06:05:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:54.828 06:05:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:54.828 [2024-11-27 06:05:59.901701] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:54.828 [2024-11-27 06:05:59.901833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60633 ] 00:10:55.087 [2024-11-27 06:06:00.053110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.087 [2024-11-27 06:06:00.126417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.344 [2024-11-27 06:06:00.185161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.344  [2024-11-27T06:06:00.441Z] Copying: 512/512 [B] (average 166 kBps) 00:10:55.344 00:10:55.344 06:06:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m0ztqisxbphvu1q02foxn4n1z1ajjx7l6iycig6r84ovfbrzp71eaw01m68kqpttys004ojrjosea6ic9jakrzs23c32osjmcynczq1e9abbp475e63waa6y6szt953uzmrpbgwkzfcywp1o3rl07e220znmtnb19iap2ao90uynw71jy4zc1qfyochqa8bkd5jd12gcka5fjlkp5d6rwl4ys4qf9r3y7sbopygtly0bka0vs3piqoyx48pb8qpbhxxkbdz3klfnwuol3cepov6ctgtoaom991x7azr7awdot3dd0rx7xuqx06ip6t8erh6hholh1koohw6toue3dg5usl3uh49jgu2yjcpr51727qwqjesfdv0lum3kensn2ddwi36ylzogm8snf9cdur9d0cid509v2kuhh858rb7pdxqlgojttclr525jtm56yuttc5sgl5t64qzc51kliyitz6tnfkjve6vryazv0828d8ngj3fh54qos558ww81 == \m\0\z\t\q\i\s\x\b\p\h\v\u\1\q\0\2\f\o\x\n\4\n\1\z\1\a\j\j\x\7\l\6\i\y\c\i\g\6\r\8\4\o\v\f\b\r\z\p\7\1\e\a\w\0\1\m\6\8\k\q\p\t\t\y\s\0\0\4\o\j\r\j\o\s\e\a\6\i\c\9\j\a\k\r\z\s\2\3\c\3\2\o\s\j\m\c\y\n\c\z\q\1\e\9\a\b\b\p\4\7\5\e\6\3\w\a\a\6\y\6\s\z\t\9\5\3\u\z\m\r\p\b\g\w\k\z\f\c\y\w\p\1\o\3\r\l\0\7\e\2\2\0\z\n\m\t\n\b\1\9\i\a\p\2\a\o\9\0\u\y\n\w\7\1\j\y\4\z\c\1\q\f\y\o\c\h\q\a\8\b\k\d\5\j\d\1\2\g\c\k\a\5\f\j\l\k\p\5\d\6\r\w\l\4\y\s\4\q\f\9\r\3\y\7\s\b\o\p\y\g\t\l\y\0\b\k\a\0\v\s\3\p\i\q\o\y\x\4\8\p\b\8\q\p\b\h\x\x\k\b\d\z\3\k\l\f\n\w\u\o\l\3\c\e\p\o\v\6\c\t\g\t\o\a\o\m\9\9\1\x\7\a\z\r\7\a\w\d\o\t\3\d\d\0\r\x\7\x\u\q\x\0\6\i\p\6\t\8\e\r\h\6\h\h\o\l\h\1\k\o\o\h\w\6\t\o\u\e\3\d\g\5\u\s\l\3\u\h\4\9\j\g\u\2\y\j\c\p\r\5\1\7\2\7\q\w\q\j\e\s\f\d\v\0\l\u\m\3\k\e\n\s\n\2\d\d\w\i\3\6\y\l\z\o\g\m\8\s\n\f\9\c\d\u\r\9\d\0\c\i\d\5\0\9\v\2\k\u\h\h\8\5\8\r\b\7\p\d\x\q\l\g\o\j\t\t\c\l\r\5\2\5\j\t\m\5\6\y\u\t\t\c\5\s\g\l\5\t\6\4\q\z\c\5\1\k\l\i\y\i\t\z\6\t\n\f\k\j\v\e\6\v\r\y\a\z\v\0\8\2\8\d\8\n\g\j\3\f\h\5\4\q\o\s\5\5\8\w\w\8\1 ]] 00:10:55.344 06:06:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:55.344 06:06:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:55.602 [2024-11-27 06:06:00.491572] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:55.602 [2024-11-27 06:06:00.491675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60644 ] 00:10:55.602 [2024-11-27 06:06:00.645912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.860 [2024-11-27 06:06:00.718640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.860 [2024-11-27 06:06:00.778659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.860  [2024-11-27T06:06:01.216Z] Copying: 512/512 [B] (average 166 kBps) 00:10:56.119 00:10:56.119 ************************************ 00:10:56.119 END TEST dd_flags_misc 00:10:56.119 ************************************ 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m0ztqisxbphvu1q02foxn4n1z1ajjx7l6iycig6r84ovfbrzp71eaw01m68kqpttys004ojrjosea6ic9jakrzs23c32osjmcynczq1e9abbp475e63waa6y6szt953uzmrpbgwkzfcywp1o3rl07e220znmtnb19iap2ao90uynw71jy4zc1qfyochqa8bkd5jd12gcka5fjlkp5d6rwl4ys4qf9r3y7sbopygtly0bka0vs3piqoyx48pb8qpbhxxkbdz3klfnwuol3cepov6ctgtoaom991x7azr7awdot3dd0rx7xuqx06ip6t8erh6hholh1koohw6toue3dg5usl3uh49jgu2yjcpr51727qwqjesfdv0lum3kensn2ddwi36ylzogm8snf9cdur9d0cid509v2kuhh858rb7pdxqlgojttclr525jtm56yuttc5sgl5t64qzc51kliyitz6tnfkjve6vryazv0828d8ngj3fh54qos558ww81 == \m\0\z\t\q\i\s\x\b\p\h\v\u\1\q\0\2\f\o\x\n\4\n\1\z\1\a\j\j\x\7\l\6\i\y\c\i\g\6\r\8\4\o\v\f\b\r\z\p\7\1\e\a\w\0\1\m\6\8\k\q\p\t\t\y\s\0\0\4\o\j\r\j\o\s\e\a\6\i\c\9\j\a\k\r\z\s\2\3\c\3\2\o\s\j\m\c\y\n\c\z\q\1\e\9\a\b\b\p\4\7\5\e\6\3\w\a\a\6\y\6\s\z\t\9\5\3\u\z\m\r\p\b\g\w\k\z\f\c\y\w\p\1\o\3\r\l\0\7\e\2\2\0\z\n\m\t\n\b\1\9\i\a\p\2\a\o\9\0\u\y\n\w\7\1\j\y\4\z\c\1\q\f\y\o\c\h\q\a\8\b\k\d\5\j\d\1\2\g\c\k\a\5\f\j\l\k\p\5\d\6\r\w\l\4\y\s\4\q\f\9\r\3\y\7\s\b\o\p\y\g\t\l\y\0\b\k\a\0\v\s\3\p\i\q\o\y\x\4\8\p\b\8\q\p\b\h\x\x\k\b\d\z\3\k\l\f\n\w\u\o\l\3\c\e\p\o\v\6\c\t\g\t\o\a\o\m\9\9\1\x\7\a\z\r\7\a\w\d\o\t\3\d\d\0\r\x\7\x\u\q\x\0\6\i\p\6\t\8\e\r\h\6\h\h\o\l\h\1\k\o\o\h\w\6\t\o\u\e\3\d\g\5\u\s\l\3\u\h\4\9\j\g\u\2\y\j\c\p\r\5\1\7\2\7\q\w\q\j\e\s\f\d\v\0\l\u\m\3\k\e\n\s\n\2\d\d\w\i\3\6\y\l\z\o\g\m\8\s\n\f\9\c\d\u\r\9\d\0\c\i\d\5\0\9\v\2\k\u\h\h\8\5\8\r\b\7\p\d\x\q\l\g\o\j\t\t\c\l\r\5\2\5\j\t\m\5\6\y\u\t\t\c\5\s\g\l\5\t\6\4\q\z\c\5\1\k\l\i\y\i\t\z\6\t\n\f\k\j\v\e\6\v\r\y\a\z\v\0\8\2\8\d\8\n\g\j\3\f\h\5\4\q\o\s\5\5\8\w\w\8\1 ]] 00:10:56.119 00:10:56.119 real 0m4.638s 00:10:56.119 user 0m2.610s 00:10:56.119 sys 0m2.281s 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:10:56.119 * Second test run, disabling liburing, forcing AIO 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.119 ************************************ 00:10:56.119 START TEST dd_flag_append_forced_aio 00:10:56.119 ************************************ 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=o74gyti55mf6baqxeoryr4bkx9ass6ye 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=xjil92ps15xfm9ir45iozkh06x6jtako 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s o74gyti55mf6baqxeoryr4bkx9ass6ye 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s xjil92ps15xfm9ir45iozkh06x6jtako 00:10:56.119 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:56.119 [2024-11-27 06:06:01.143932] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:10:56.119 [2024-11-27 06:06:01.144229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:10:56.377 [2024-11-27 06:06:01.296993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.377 [2024-11-27 06:06:01.367042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.377 [2024-11-27 06:06:01.424621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.377  [2024-11-27T06:06:01.731Z] Copying: 32/32 [B] (average 31 kBps) 00:10:56.634 00:10:56.634 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ xjil92ps15xfm9ir45iozkh06x6jtakoo74gyti55mf6baqxeoryr4bkx9ass6ye == \x\j\i\l\9\2\p\s\1\5\x\f\m\9\i\r\4\5\i\o\z\k\h\0\6\x\6\j\t\a\k\o\o\7\4\g\y\t\i\5\5\m\f\6\b\a\q\x\e\o\r\y\r\4\b\k\x\9\a\s\s\6\y\e ]] 00:10:56.634 00:10:56.634 real 0m0.615s 00:10:56.634 user 0m0.329s 00:10:56.634 sys 0m0.154s 00:10:56.634 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.634 ************************************ 00:10:56.634 END TEST dd_flag_append_forced_aio 00:10:56.634 ************************************ 00:10:56.634 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:56.892 ************************************ 00:10:56.892 START TEST dd_flag_directory_forced_aio 00:10:56.892 ************************************ 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.892 06:06:01 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:56.892 06:06:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.892 [2024-11-27 06:06:01.826879] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:56.892 [2024-11-27 06:06:01.827016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:10:57.150 [2024-11-27 06:06:02.061653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.150 [2024-11-27 06:06:02.125351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.150 [2024-11-27 06:06:02.180842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.150 [2024-11-27 06:06:02.222356] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:57.150 [2024-11-27 06:06:02.223712] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:57.150 [2024-11-27 06:06:02.223740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:57.408 [2024-11-27 06:06:02.345047] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:57.408 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:57.408 [2024-11-27 06:06:02.465241] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:57.408 [2024-11-27 06:06:02.465329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60714 ] 00:10:57.666 [2024-11-27 06:06:02.614085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.666 [2024-11-27 06:06:02.686937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.666 [2024-11-27 06:06:02.751215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.925 [2024-11-27 06:06:02.794511] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:57.925 [2024-11-27 06:06:02.794586] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:57.925 [2024-11-27 06:06:02.794619] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:57.925 [2024-11-27 06:06:02.925048] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:57.925 06:06:02 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.925 00:10:57.925 real 0m1.256s 00:10:57.925 user 0m0.724s 00:10:57.925 sys 0m0.318s 00:10:57.925 ************************************ 00:10:57.925 END TEST dd_flag_directory_forced_aio 00:10:57.925 ************************************ 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.925 06:06:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:58.185 ************************************ 00:10:58.185 START TEST dd_flag_nofollow_forced_aio 00:10:58.185 ************************************ 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.185 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.186 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.186 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:58.186 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:58.186 [2024-11-27 06:06:03.118621] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:58.186 [2024-11-27 06:06:03.118721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60748 ] 00:10:58.186 [2024-11-27 06:06:03.272272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.444 [2024-11-27 06:06:03.346413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.444 [2024-11-27 06:06:03.407439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.444 [2024-11-27 06:06:03.449791] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:58.444 [2024-11-27 06:06:03.450106] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:58.444 [2024-11-27 06:06:03.450163] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:58.703 [2024-11-27 06:06:03.576636] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:58.703 06:06:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:58.703 [2024-11-27 06:06:03.710786] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:58.703 [2024-11-27 06:06:03.710904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:10:58.987 [2024-11-27 06:06:03.861246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.987 [2024-11-27 06:06:03.956090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.987 [2024-11-27 06:06:04.015686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.987 [2024-11-27 06:06:04.055970] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:58.987 [2024-11-27 06:06:04.056021] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:58.987 [2024-11-27 06:06:04.056042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:59.245 [2024-11-27 06:06:04.176489] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:59.245 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:59.245 [2024-11-27 06:06:04.310013] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:10:59.245 [2024-11-27 06:06:04.310443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:10:59.503 [2024-11-27 06:06:04.462823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.503 [2024-11-27 06:06:04.548283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.761 [2024-11-27 06:06:04.615321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.761  [2024-11-27T06:06:05.117Z] Copying: 512/512 [B] (average 500 kBps) 00:11:00.020 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ pion24d4h2q4ttq4dy89gvier4zwrx2x0q5mx1lwis39y6bg6wz4vkmo8j4ladd9lq04cvms6nxowb23dsqeeudd0gqgnfrphcitnrci2102vd8n8pt9m95v205qwdiof03q0x5vlsd066q79cqybge5lq7j2g4cbw2fnjyc2fxqeo20um7ji1mvooe98funadlqotuxs9y3he99z4jtwyg6sagwgu5f7y541d0hwc4hfabqti1ekoxw9blsn8hur97ds434id78ratgif9mluc5fnglbwwbzl2mfjtbipet955yj4y9tyfpj5685zl6v0r78ek5g3w4lcs29oyi84i8hcsmm5lkd63g313crbb9c6l9xppfvyhe8zzxuc17cfuach77fa5lbl631vtynmdjdnm7qatjost3thzjnhluv8qsu8dc2vno7f2aibpnz2tjb7s34nigp8echs5gkcx0rrai0kbwsmplunr1u1fpr5tb2fecmn9opoo8qc8q == \p\i\o\n\2\4\d\4\h\2\q\4\t\t\q\4\d\y\8\9\g\v\i\e\r\4\z\w\r\x\2\x\0\q\5\m\x\1\l\w\i\s\3\9\y\6\b\g\6\w\z\4\v\k\m\o\8\j\4\l\a\d\d\9\l\q\0\4\c\v\m\s\6\n\x\o\w\b\2\3\d\s\q\e\e\u\d\d\0\g\q\g\n\f\r\p\h\c\i\t\n\r\c\i\2\1\0\2\v\d\8\n\8\p\t\9\m\9\5\v\2\0\5\q\w\d\i\o\f\0\3\q\0\x\5\v\l\s\d\0\6\6\q\7\9\c\q\y\b\g\e\5\l\q\7\j\2\g\4\c\b\w\2\f\n\j\y\c\2\f\x\q\e\o\2\0\u\m\7\j\i\1\m\v\o\o\e\9\8\f\u\n\a\d\l\q\o\t\u\x\s\9\y\3\h\e\9\9\z\4\j\t\w\y\g\6\s\a\g\w\g\u\5\f\7\y\5\4\1\d\0\h\w\c\4\h\f\a\b\q\t\i\1\e\k\o\x\w\9\b\l\s\n\8\h\u\r\9\7\d\s\4\3\4\i\d\7\8\r\a\t\g\i\f\9\m\l\u\c\5\f\n\g\l\b\w\w\b\z\l\2\m\f\j\t\b\i\p\e\t\9\5\5\y\j\4\y\9\t\y\f\p\j\5\6\8\5\z\l\6\v\0\r\7\8\e\k\5\g\3\w\4\l\c\s\2\9\o\y\i\8\4\i\8\h\c\s\m\m\5\l\k\d\6\3\g\3\1\3\c\r\b\b\9\c\6\l\9\x\p\p\f\v\y\h\e\8\z\z\x\u\c\1\7\c\f\u\a\c\h\7\7\f\a\5\l\b\l\6\3\1\v\t\y\n\m\d\j\d\n\m\7\q\a\t\j\o\s\t\3\t\h\z\j\n\h\l\u\v\8\q\s\u\8\d\c\2\v\n\o\7\f\2\a\i\b\p\n\z\2\t\j\b\7\s\3\4\n\i\g\p\8\e\c\h\s\5\g\k\c\x\0\r\r\a\i\0\k\b\w\s\m\p\l\u\n\r\1\u\1\f\p\r\5\t\b\2\f\e\c\m\n\9\o\p\o\o\8\q\c\8\q ]] 00:11:00.020 00:11:00.020 real 0m1.863s 00:11:00.020 user 0m1.047s 00:11:00.020 sys 0m0.474s 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.020 ************************************ 00:11:00.020 END TEST dd_flag_nofollow_forced_aio 00:11:00.020 ************************************ 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:00.020 ************************************ 00:11:00.020 START TEST dd_flag_noatime_forced_aio 00:11:00.020 ************************************ 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:00.020 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732687564 00:11:00.021 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:00.021 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732687564 00:11:00.021 06:06:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:11:00.955 06:06:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:00.955 [2024-11-27 06:06:06.033152] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:00.955 [2024-11-27 06:06:06.033260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:11:01.213 [2024-11-27 06:06:06.180219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.213 [2024-11-27 06:06:06.263666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.472 [2024-11-27 06:06:06.328904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.472  [2024-11-27T06:06:06.827Z] Copying: 512/512 [B] (average 500 kBps) 00:11:01.730 00:11:01.730 06:06:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:01.730 06:06:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732687564 )) 00:11:01.730 06:06:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.730 06:06:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732687564 )) 00:11:01.730 06:06:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.730 [2024-11-27 06:06:06.691784] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:01.730 [2024-11-27 06:06:06.691921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60817 ] 00:11:01.988 [2024-11-27 06:06:06.835794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.989 [2024-11-27 06:06:06.904822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.989 [2024-11-27 06:06:06.960233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.989  [2024-11-27T06:06:07.343Z] Copying: 512/512 [B] (average 500 kBps) 00:11:02.246 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732687567 )) 00:11:02.246 00:11:02.246 real 0m2.257s 00:11:02.246 user 0m0.693s 00:11:02.246 sys 0m0.316s 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:02.246 ************************************ 00:11:02.246 END TEST dd_flag_noatime_forced_aio 00:11:02.246 ************************************ 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.246 ************************************ 00:11:02.246 START TEST dd_flags_misc_forced_aio 00:11:02.246 ************************************ 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:02.246 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:02.246 [2024-11-27 06:06:07.314997] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:02.246 [2024-11-27 06:06:07.315545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60849 ] 00:11:02.504 [2024-11-27 06:06:07.465803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.504 [2024-11-27 06:06:07.536533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.504 [2024-11-27 06:06:07.594868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.762  [2024-11-27T06:06:07.859Z] Copying: 512/512 [B] (average 500 kBps) 00:11:02.762 00:11:02.762 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bpna2gk4vksd3vkr2awee3adas66bho4a7db45ilbkmg9hnhzud20lfao130lqvf7hhycqi0d7aljitc3qq6n6w9qi17hussep2w1iketdn4jce9xxj9iqauejxgrbknd0guakx0oygv8f40ougfjn9j9vdejdtto7py6jc2bwcmqsv5ci9bhs7gcxi383pmqtx2ufje1jxlafa897cyw4gvli7gs3ap656f2uwwbiuvl4qsgcprmzocsq0tdrgoivyu4j1liflyuijoqiaezpn05t4cfwvrpzxh1t1a029gdp2ih257x2g89vj4fk1eigq71811g3aucjg3dthn7lf4d2wmb78v81mkiauden60upwjuqev2nrzwjedg6e7tethvlzli78io3j49a20k28jh8oninos05tyxjs7nx7kd5bdzae2rimevxdbjtrzw0ikgr5psmeebiy38sx4tolpmk8u2dk9myn908xxwm25kzyskgk2775323dyq2n5 == 
\b\p\n\a\2\g\k\4\v\k\s\d\3\v\k\r\2\a\w\e\e\3\a\d\a\s\6\6\b\h\o\4\a\7\d\b\4\5\i\l\b\k\m\g\9\h\n\h\z\u\d\2\0\l\f\a\o\1\3\0\l\q\v\f\7\h\h\y\c\q\i\0\d\7\a\l\j\i\t\c\3\q\q\6\n\6\w\9\q\i\1\7\h\u\s\s\e\p\2\w\1\i\k\e\t\d\n\4\j\c\e\9\x\x\j\9\i\q\a\u\e\j\x\g\r\b\k\n\d\0\g\u\a\k\x\0\o\y\g\v\8\f\4\0\o\u\g\f\j\n\9\j\9\v\d\e\j\d\t\t\o\7\p\y\6\j\c\2\b\w\c\m\q\s\v\5\c\i\9\b\h\s\7\g\c\x\i\3\8\3\p\m\q\t\x\2\u\f\j\e\1\j\x\l\a\f\a\8\9\7\c\y\w\4\g\v\l\i\7\g\s\3\a\p\6\5\6\f\2\u\w\w\b\i\u\v\l\4\q\s\g\c\p\r\m\z\o\c\s\q\0\t\d\r\g\o\i\v\y\u\4\j\1\l\i\f\l\y\u\i\j\o\q\i\a\e\z\p\n\0\5\t\4\c\f\w\v\r\p\z\x\h\1\t\1\a\0\2\9\g\d\p\2\i\h\2\5\7\x\2\g\8\9\v\j\4\f\k\1\e\i\g\q\7\1\8\1\1\g\3\a\u\c\j\g\3\d\t\h\n\7\l\f\4\d\2\w\m\b\7\8\v\8\1\m\k\i\a\u\d\e\n\6\0\u\p\w\j\u\q\e\v\2\n\r\z\w\j\e\d\g\6\e\7\t\e\t\h\v\l\z\l\i\7\8\i\o\3\j\4\9\a\2\0\k\2\8\j\h\8\o\n\i\n\o\s\0\5\t\y\x\j\s\7\n\x\7\k\d\5\b\d\z\a\e\2\r\i\m\e\v\x\d\b\j\t\r\z\w\0\i\k\g\r\5\p\s\m\e\e\b\i\y\3\8\s\x\4\t\o\l\p\m\k\8\u\2\d\k\9\m\y\n\9\0\8\x\x\w\m\2\5\k\z\y\s\k\g\k\2\7\7\5\3\2\3\d\y\q\2\n\5 ]] 00:11:02.762 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:02.762 06:06:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:03.089 [2024-11-27 06:06:07.907763] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:03.090 [2024-11-27 06:06:07.908355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60851 ] 00:11:03.090 [2024-11-27 06:06:08.059857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.090 [2024-11-27 06:06:08.127289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.351 [2024-11-27 06:06:08.184150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.351  [2024-11-27T06:06:08.448Z] Copying: 512/512 [B] (average 500 kBps) 00:11:03.351 00:11:03.610 06:06:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bpna2gk4vksd3vkr2awee3adas66bho4a7db45ilbkmg9hnhzud20lfao130lqvf7hhycqi0d7aljitc3qq6n6w9qi17hussep2w1iketdn4jce9xxj9iqauejxgrbknd0guakx0oygv8f40ougfjn9j9vdejdtto7py6jc2bwcmqsv5ci9bhs7gcxi383pmqtx2ufje1jxlafa897cyw4gvli7gs3ap656f2uwwbiuvl4qsgcprmzocsq0tdrgoivyu4j1liflyuijoqiaezpn05t4cfwvrpzxh1t1a029gdp2ih257x2g89vj4fk1eigq71811g3aucjg3dthn7lf4d2wmb78v81mkiauden60upwjuqev2nrzwjedg6e7tethvlzli78io3j49a20k28jh8oninos05tyxjs7nx7kd5bdzae2rimevxdbjtrzw0ikgr5psmeebiy38sx4tolpmk8u2dk9myn908xxwm25kzyskgk2775323dyq2n5 == 
\b\p\n\a\2\g\k\4\v\k\s\d\3\v\k\r\2\a\w\e\e\3\a\d\a\s\6\6\b\h\o\4\a\7\d\b\4\5\i\l\b\k\m\g\9\h\n\h\z\u\d\2\0\l\f\a\o\1\3\0\l\q\v\f\7\h\h\y\c\q\i\0\d\7\a\l\j\i\t\c\3\q\q\6\n\6\w\9\q\i\1\7\h\u\s\s\e\p\2\w\1\i\k\e\t\d\n\4\j\c\e\9\x\x\j\9\i\q\a\u\e\j\x\g\r\b\k\n\d\0\g\u\a\k\x\0\o\y\g\v\8\f\4\0\o\u\g\f\j\n\9\j\9\v\d\e\j\d\t\t\o\7\p\y\6\j\c\2\b\w\c\m\q\s\v\5\c\i\9\b\h\s\7\g\c\x\i\3\8\3\p\m\q\t\x\2\u\f\j\e\1\j\x\l\a\f\a\8\9\7\c\y\w\4\g\v\l\i\7\g\s\3\a\p\6\5\6\f\2\u\w\w\b\i\u\v\l\4\q\s\g\c\p\r\m\z\o\c\s\q\0\t\d\r\g\o\i\v\y\u\4\j\1\l\i\f\l\y\u\i\j\o\q\i\a\e\z\p\n\0\5\t\4\c\f\w\v\r\p\z\x\h\1\t\1\a\0\2\9\g\d\p\2\i\h\2\5\7\x\2\g\8\9\v\j\4\f\k\1\e\i\g\q\7\1\8\1\1\g\3\a\u\c\j\g\3\d\t\h\n\7\l\f\4\d\2\w\m\b\7\8\v\8\1\m\k\i\a\u\d\e\n\6\0\u\p\w\j\u\q\e\v\2\n\r\z\w\j\e\d\g\6\e\7\t\e\t\h\v\l\z\l\i\7\8\i\o\3\j\4\9\a\2\0\k\2\8\j\h\8\o\n\i\n\o\s\0\5\t\y\x\j\s\7\n\x\7\k\d\5\b\d\z\a\e\2\r\i\m\e\v\x\d\b\j\t\r\z\w\0\i\k\g\r\5\p\s\m\e\e\b\i\y\3\8\s\x\4\t\o\l\p\m\k\8\u\2\d\k\9\m\y\n\9\0\8\x\x\w\m\2\5\k\z\y\s\k\g\k\2\7\7\5\3\2\3\d\y\q\2\n\5 ]] 00:11:03.610 06:06:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:03.610 06:06:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:03.610 [2024-11-27 06:06:08.492050] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:03.610 [2024-11-27 06:06:08.492158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60864 ] 00:11:03.610 [2024-11-27 06:06:08.640040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.868 [2024-11-27 06:06:08.714999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.868 [2024-11-27 06:06:08.774680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.868  [2024-11-27T06:06:09.225Z] Copying: 512/512 [B] (average 71 kBps) 00:11:04.128 00:11:04.128 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bpna2gk4vksd3vkr2awee3adas66bho4a7db45ilbkmg9hnhzud20lfao130lqvf7hhycqi0d7aljitc3qq6n6w9qi17hussep2w1iketdn4jce9xxj9iqauejxgrbknd0guakx0oygv8f40ougfjn9j9vdejdtto7py6jc2bwcmqsv5ci9bhs7gcxi383pmqtx2ufje1jxlafa897cyw4gvli7gs3ap656f2uwwbiuvl4qsgcprmzocsq0tdrgoivyu4j1liflyuijoqiaezpn05t4cfwvrpzxh1t1a029gdp2ih257x2g89vj4fk1eigq71811g3aucjg3dthn7lf4d2wmb78v81mkiauden60upwjuqev2nrzwjedg6e7tethvlzli78io3j49a20k28jh8oninos05tyxjs7nx7kd5bdzae2rimevxdbjtrzw0ikgr5psmeebiy38sx4tolpmk8u2dk9myn908xxwm25kzyskgk2775323dyq2n5 == 
\b\p\n\a\2\g\k\4\v\k\s\d\3\v\k\r\2\a\w\e\e\3\a\d\a\s\6\6\b\h\o\4\a\7\d\b\4\5\i\l\b\k\m\g\9\h\n\h\z\u\d\2\0\l\f\a\o\1\3\0\l\q\v\f\7\h\h\y\c\q\i\0\d\7\a\l\j\i\t\c\3\q\q\6\n\6\w\9\q\i\1\7\h\u\s\s\e\p\2\w\1\i\k\e\t\d\n\4\j\c\e\9\x\x\j\9\i\q\a\u\e\j\x\g\r\b\k\n\d\0\g\u\a\k\x\0\o\y\g\v\8\f\4\0\o\u\g\f\j\n\9\j\9\v\d\e\j\d\t\t\o\7\p\y\6\j\c\2\b\w\c\m\q\s\v\5\c\i\9\b\h\s\7\g\c\x\i\3\8\3\p\m\q\t\x\2\u\f\j\e\1\j\x\l\a\f\a\8\9\7\c\y\w\4\g\v\l\i\7\g\s\3\a\p\6\5\6\f\2\u\w\w\b\i\u\v\l\4\q\s\g\c\p\r\m\z\o\c\s\q\0\t\d\r\g\o\i\v\y\u\4\j\1\l\i\f\l\y\u\i\j\o\q\i\a\e\z\p\n\0\5\t\4\c\f\w\v\r\p\z\x\h\1\t\1\a\0\2\9\g\d\p\2\i\h\2\5\7\x\2\g\8\9\v\j\4\f\k\1\e\i\g\q\7\1\8\1\1\g\3\a\u\c\j\g\3\d\t\h\n\7\l\f\4\d\2\w\m\b\7\8\v\8\1\m\k\i\a\u\d\e\n\6\0\u\p\w\j\u\q\e\v\2\n\r\z\w\j\e\d\g\6\e\7\t\e\t\h\v\l\z\l\i\7\8\i\o\3\j\4\9\a\2\0\k\2\8\j\h\8\o\n\i\n\o\s\0\5\t\y\x\j\s\7\n\x\7\k\d\5\b\d\z\a\e\2\r\i\m\e\v\x\d\b\j\t\r\z\w\0\i\k\g\r\5\p\s\m\e\e\b\i\y\3\8\s\x\4\t\o\l\p\m\k\8\u\2\d\k\9\m\y\n\9\0\8\x\x\w\m\2\5\k\z\y\s\k\g\k\2\7\7\5\3\2\3\d\y\q\2\n\5 ]] 00:11:04.128 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:04.128 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:04.128 [2024-11-27 06:06:09.108022] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:04.128 [2024-11-27 06:06:09.108156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:11:04.386 [2024-11-27 06:06:09.265924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.387 [2024-11-27 06:06:09.340033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.387 [2024-11-27 06:06:09.399102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.387  [2024-11-27T06:06:09.743Z] Copying: 512/512 [B] (average 125 kBps) 00:11:04.646 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bpna2gk4vksd3vkr2awee3adas66bho4a7db45ilbkmg9hnhzud20lfao130lqvf7hhycqi0d7aljitc3qq6n6w9qi17hussep2w1iketdn4jce9xxj9iqauejxgrbknd0guakx0oygv8f40ougfjn9j9vdejdtto7py6jc2bwcmqsv5ci9bhs7gcxi383pmqtx2ufje1jxlafa897cyw4gvli7gs3ap656f2uwwbiuvl4qsgcprmzocsq0tdrgoivyu4j1liflyuijoqiaezpn05t4cfwvrpzxh1t1a029gdp2ih257x2g89vj4fk1eigq71811g3aucjg3dthn7lf4d2wmb78v81mkiauden60upwjuqev2nrzwjedg6e7tethvlzli78io3j49a20k28jh8oninos05tyxjs7nx7kd5bdzae2rimevxdbjtrzw0ikgr5psmeebiy38sx4tolpmk8u2dk9myn908xxwm25kzyskgk2775323dyq2n5 == 
\b\p\n\a\2\g\k\4\v\k\s\d\3\v\k\r\2\a\w\e\e\3\a\d\a\s\6\6\b\h\o\4\a\7\d\b\4\5\i\l\b\k\m\g\9\h\n\h\z\u\d\2\0\l\f\a\o\1\3\0\l\q\v\f\7\h\h\y\c\q\i\0\d\7\a\l\j\i\t\c\3\q\q\6\n\6\w\9\q\i\1\7\h\u\s\s\e\p\2\w\1\i\k\e\t\d\n\4\j\c\e\9\x\x\j\9\i\q\a\u\e\j\x\g\r\b\k\n\d\0\g\u\a\k\x\0\o\y\g\v\8\f\4\0\o\u\g\f\j\n\9\j\9\v\d\e\j\d\t\t\o\7\p\y\6\j\c\2\b\w\c\m\q\s\v\5\c\i\9\b\h\s\7\g\c\x\i\3\8\3\p\m\q\t\x\2\u\f\j\e\1\j\x\l\a\f\a\8\9\7\c\y\w\4\g\v\l\i\7\g\s\3\a\p\6\5\6\f\2\u\w\w\b\i\u\v\l\4\q\s\g\c\p\r\m\z\o\c\s\q\0\t\d\r\g\o\i\v\y\u\4\j\1\l\i\f\l\y\u\i\j\o\q\i\a\e\z\p\n\0\5\t\4\c\f\w\v\r\p\z\x\h\1\t\1\a\0\2\9\g\d\p\2\i\h\2\5\7\x\2\g\8\9\v\j\4\f\k\1\e\i\g\q\7\1\8\1\1\g\3\a\u\c\j\g\3\d\t\h\n\7\l\f\4\d\2\w\m\b\7\8\v\8\1\m\k\i\a\u\d\e\n\6\0\u\p\w\j\u\q\e\v\2\n\r\z\w\j\e\d\g\6\e\7\t\e\t\h\v\l\z\l\i\7\8\i\o\3\j\4\9\a\2\0\k\2\8\j\h\8\o\n\i\n\o\s\0\5\t\y\x\j\s\7\n\x\7\k\d\5\b\d\z\a\e\2\r\i\m\e\v\x\d\b\j\t\r\z\w\0\i\k\g\r\5\p\s\m\e\e\b\i\y\3\8\s\x\4\t\o\l\p\m\k\8\u\2\d\k\9\m\y\n\9\0\8\x\x\w\m\2\5\k\z\y\s\k\g\k\2\7\7\5\3\2\3\d\y\q\2\n\5 ]] 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:04.646 06:06:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:04.904 [2024-11-27 06:06:09.745121] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
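The xtrace above repeats one pattern per flag pair: generate 512 random bytes, copy dd.dump0 to dd.dump1 through spdk_dd with the chosen --iflag/--oflag, then compare the two files byte for byte. A minimal stand-alone sketch of that loop, using head and cmp as stand-ins for the suite's gen_bytes and string-compare helpers (which differ in detail):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd        # binary path as seen in the trace
src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  head -c 512 /dev/urandom > "$src"                           # stand-in for gen_bytes 512
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
    cmp "$src" "$dst"                                         # the trace does the same check as a [[ ... == ... ]] string match
  done
done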
00:11:04.904 [2024-11-27 06:06:09.745551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60879 ] 00:11:04.904 [2024-11-27 06:06:09.911370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.162 [2024-11-27 06:06:09.999815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.162 [2024-11-27 06:06:10.058688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.162  [2024-11-27T06:06:10.517Z] Copying: 512/512 [B] (average 500 kBps) 00:11:05.420 00:11:05.421 06:06:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 406ykrixgu4gyxpqzq9ffboh3tp0hy8977k1n1q40xn1mqszeui2c3sns2dto6316smopdf4i3x39fncm4az6i1xtzco3i5jnjgjuqyl7h31aw6aqhmwfeh66nmgoq1a6lbdchzas2rlr0d23mde6dlo34qrfn38bz8x9fvfmlbw5xdebesn3sfk7k3qupq334zedth8r1smrbvfbyopwtwgy4lbqhi0x9km3byvry9wbksrtrhc9ly64iyw99qrlx693lqatfn9a0x0gjtak9u9jc1pq501a58i2ll0mru756y99c4hk3reuisrn0mdjtnmryzgwt3gc2oo90aod5chkel0eu0ldo6re2fxp954nu3493nd5ra0opwoi7l2aeu2tfjci99dx5aqoa357gckohnlkpzomu94zkoeh2rj1q0kearpkg2o9gt9mpip0p2bpqor6orakqigfexz3scu5wmldc7eqtt7ljc1m8373yl18bmx83sadj6ilp8z == \4\0\6\y\k\r\i\x\g\u\4\g\y\x\p\q\z\q\9\f\f\b\o\h\3\t\p\0\h\y\8\9\7\7\k\1\n\1\q\4\0\x\n\1\m\q\s\z\e\u\i\2\c\3\s\n\s\2\d\t\o\6\3\1\6\s\m\o\p\d\f\4\i\3\x\3\9\f\n\c\m\4\a\z\6\i\1\x\t\z\c\o\3\i\5\j\n\j\g\j\u\q\y\l\7\h\3\1\a\w\6\a\q\h\m\w\f\e\h\6\6\n\m\g\o\q\1\a\6\l\b\d\c\h\z\a\s\2\r\l\r\0\d\2\3\m\d\e\6\d\l\o\3\4\q\r\f\n\3\8\b\z\8\x\9\f\v\f\m\l\b\w\5\x\d\e\b\e\s\n\3\s\f\k\7\k\3\q\u\p\q\3\3\4\z\e\d\t\h\8\r\1\s\m\r\b\v\f\b\y\o\p\w\t\w\g\y\4\l\b\q\h\i\0\x\9\k\m\3\b\y\v\r\y\9\w\b\k\s\r\t\r\h\c\9\l\y\6\4\i\y\w\9\9\q\r\l\x\6\9\3\l\q\a\t\f\n\9\a\0\x\0\g\j\t\a\k\9\u\9\j\c\1\p\q\5\0\1\a\5\8\i\2\l\l\0\m\r\u\7\5\6\y\9\9\c\4\h\k\3\r\e\u\i\s\r\n\0\m\d\j\t\n\m\r\y\z\g\w\t\3\g\c\2\o\o\9\0\a\o\d\5\c\h\k\e\l\0\e\u\0\l\d\o\6\r\e\2\f\x\p\9\5\4\n\u\3\4\9\3\n\d\5\r\a\0\o\p\w\o\i\7\l\2\a\e\u\2\t\f\j\c\i\9\9\d\x\5\a\q\o\a\3\5\7\g\c\k\o\h\n\l\k\p\z\o\m\u\9\4\z\k\o\e\h\2\r\j\1\q\0\k\e\a\r\p\k\g\2\o\9\g\t\9\m\p\i\p\0\p\2\b\p\q\o\r\6\o\r\a\k\q\i\g\f\e\x\z\3\s\c\u\5\w\m\l\d\c\7\e\q\t\t\7\l\j\c\1\m\8\3\7\3\y\l\1\8\b\m\x\8\3\s\a\d\j\6\i\l\p\8\z ]] 00:11:05.421 06:06:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:05.421 06:06:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:05.421 [2024-11-27 06:06:10.404483] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:05.421 [2024-11-27 06:06:10.404613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60887 ] 00:11:05.680 [2024-11-27 06:06:10.558468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.680 [2024-11-27 06:06:10.633643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.680 [2024-11-27 06:06:10.694327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.680  [2024-11-27T06:06:11.035Z] Copying: 512/512 [B] (average 500 kBps) 00:11:05.938 00:11:05.938 06:06:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 406ykrixgu4gyxpqzq9ffboh3tp0hy8977k1n1q40xn1mqszeui2c3sns2dto6316smopdf4i3x39fncm4az6i1xtzco3i5jnjgjuqyl7h31aw6aqhmwfeh66nmgoq1a6lbdchzas2rlr0d23mde6dlo34qrfn38bz8x9fvfmlbw5xdebesn3sfk7k3qupq334zedth8r1smrbvfbyopwtwgy4lbqhi0x9km3byvry9wbksrtrhc9ly64iyw99qrlx693lqatfn9a0x0gjtak9u9jc1pq501a58i2ll0mru756y99c4hk3reuisrn0mdjtnmryzgwt3gc2oo90aod5chkel0eu0ldo6re2fxp954nu3493nd5ra0opwoi7l2aeu2tfjci99dx5aqoa357gckohnlkpzomu94zkoeh2rj1q0kearpkg2o9gt9mpip0p2bpqor6orakqigfexz3scu5wmldc7eqtt7ljc1m8373yl18bmx83sadj6ilp8z == \4\0\6\y\k\r\i\x\g\u\4\g\y\x\p\q\z\q\9\f\f\b\o\h\3\t\p\0\h\y\8\9\7\7\k\1\n\1\q\4\0\x\n\1\m\q\s\z\e\u\i\2\c\3\s\n\s\2\d\t\o\6\3\1\6\s\m\o\p\d\f\4\i\3\x\3\9\f\n\c\m\4\a\z\6\i\1\x\t\z\c\o\3\i\5\j\n\j\g\j\u\q\y\l\7\h\3\1\a\w\6\a\q\h\m\w\f\e\h\6\6\n\m\g\o\q\1\a\6\l\b\d\c\h\z\a\s\2\r\l\r\0\d\2\3\m\d\e\6\d\l\o\3\4\q\r\f\n\3\8\b\z\8\x\9\f\v\f\m\l\b\w\5\x\d\e\b\e\s\n\3\s\f\k\7\k\3\q\u\p\q\3\3\4\z\e\d\t\h\8\r\1\s\m\r\b\v\f\b\y\o\p\w\t\w\g\y\4\l\b\q\h\i\0\x\9\k\m\3\b\y\v\r\y\9\w\b\k\s\r\t\r\h\c\9\l\y\6\4\i\y\w\9\9\q\r\l\x\6\9\3\l\q\a\t\f\n\9\a\0\x\0\g\j\t\a\k\9\u\9\j\c\1\p\q\5\0\1\a\5\8\i\2\l\l\0\m\r\u\7\5\6\y\9\9\c\4\h\k\3\r\e\u\i\s\r\n\0\m\d\j\t\n\m\r\y\z\g\w\t\3\g\c\2\o\o\9\0\a\o\d\5\c\h\k\e\l\0\e\u\0\l\d\o\6\r\e\2\f\x\p\9\5\4\n\u\3\4\9\3\n\d\5\r\a\0\o\p\w\o\i\7\l\2\a\e\u\2\t\f\j\c\i\9\9\d\x\5\a\q\o\a\3\5\7\g\c\k\o\h\n\l\k\p\z\o\m\u\9\4\z\k\o\e\h\2\r\j\1\q\0\k\e\a\r\p\k\g\2\o\9\g\t\9\m\p\i\p\0\p\2\b\p\q\o\r\6\o\r\a\k\q\i\g\f\e\x\z\3\s\c\u\5\w\m\l\d\c\7\e\q\t\t\7\l\j\c\1\m\8\3\7\3\y\l\1\8\b\m\x\8\3\s\a\d\j\6\i\l\p\8\z ]] 00:11:05.938 06:06:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:05.938 06:06:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:05.938 [2024-11-27 06:06:11.029734] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:05.938 [2024-11-27 06:06:11.029854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60894 ] 00:11:06.196 [2024-11-27 06:06:11.183120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.196 [2024-11-27 06:06:11.255054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.457 [2024-11-27 06:06:11.312925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.457  [2024-11-27T06:06:11.812Z] Copying: 512/512 [B] (average 166 kBps) 00:11:06.715 00:11:06.715 06:06:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 406ykrixgu4gyxpqzq9ffboh3tp0hy8977k1n1q40xn1mqszeui2c3sns2dto6316smopdf4i3x39fncm4az6i1xtzco3i5jnjgjuqyl7h31aw6aqhmwfeh66nmgoq1a6lbdchzas2rlr0d23mde6dlo34qrfn38bz8x9fvfmlbw5xdebesn3sfk7k3qupq334zedth8r1smrbvfbyopwtwgy4lbqhi0x9km3byvry9wbksrtrhc9ly64iyw99qrlx693lqatfn9a0x0gjtak9u9jc1pq501a58i2ll0mru756y99c4hk3reuisrn0mdjtnmryzgwt3gc2oo90aod5chkel0eu0ldo6re2fxp954nu3493nd5ra0opwoi7l2aeu2tfjci99dx5aqoa357gckohnlkpzomu94zkoeh2rj1q0kearpkg2o9gt9mpip0p2bpqor6orakqigfexz3scu5wmldc7eqtt7ljc1m8373yl18bmx83sadj6ilp8z == \4\0\6\y\k\r\i\x\g\u\4\g\y\x\p\q\z\q\9\f\f\b\o\h\3\t\p\0\h\y\8\9\7\7\k\1\n\1\q\4\0\x\n\1\m\q\s\z\e\u\i\2\c\3\s\n\s\2\d\t\o\6\3\1\6\s\m\o\p\d\f\4\i\3\x\3\9\f\n\c\m\4\a\z\6\i\1\x\t\z\c\o\3\i\5\j\n\j\g\j\u\q\y\l\7\h\3\1\a\w\6\a\q\h\m\w\f\e\h\6\6\n\m\g\o\q\1\a\6\l\b\d\c\h\z\a\s\2\r\l\r\0\d\2\3\m\d\e\6\d\l\o\3\4\q\r\f\n\3\8\b\z\8\x\9\f\v\f\m\l\b\w\5\x\d\e\b\e\s\n\3\s\f\k\7\k\3\q\u\p\q\3\3\4\z\e\d\t\h\8\r\1\s\m\r\b\v\f\b\y\o\p\w\t\w\g\y\4\l\b\q\h\i\0\x\9\k\m\3\b\y\v\r\y\9\w\b\k\s\r\t\r\h\c\9\l\y\6\4\i\y\w\9\9\q\r\l\x\6\9\3\l\q\a\t\f\n\9\a\0\x\0\g\j\t\a\k\9\u\9\j\c\1\p\q\5\0\1\a\5\8\i\2\l\l\0\m\r\u\7\5\6\y\9\9\c\4\h\k\3\r\e\u\i\s\r\n\0\m\d\j\t\n\m\r\y\z\g\w\t\3\g\c\2\o\o\9\0\a\o\d\5\c\h\k\e\l\0\e\u\0\l\d\o\6\r\e\2\f\x\p\9\5\4\n\u\3\4\9\3\n\d\5\r\a\0\o\p\w\o\i\7\l\2\a\e\u\2\t\f\j\c\i\9\9\d\x\5\a\q\o\a\3\5\7\g\c\k\o\h\n\l\k\p\z\o\m\u\9\4\z\k\o\e\h\2\r\j\1\q\0\k\e\a\r\p\k\g\2\o\9\g\t\9\m\p\i\p\0\p\2\b\p\q\o\r\6\o\r\a\k\q\i\g\f\e\x\z\3\s\c\u\5\w\m\l\d\c\7\e\q\t\t\7\l\j\c\1\m\8\3\7\3\y\l\1\8\b\m\x\8\3\s\a\d\j\6\i\l\p\8\z ]] 00:11:06.715 06:06:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:06.715 06:06:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:06.715 [2024-11-27 06:06:11.642580] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:06.715 [2024-11-27 06:06:11.642714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60907 ] 00:11:06.715 [2024-11-27 06:06:11.793703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.973 [2024-11-27 06:06:11.865470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.973 [2024-11-27 06:06:11.922521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.973  [2024-11-27T06:06:12.328Z] Copying: 512/512 [B] (average 11 kBps) 00:11:07.231 00:11:07.231 ************************************ 00:11:07.231 END TEST dd_flags_misc_forced_aio 00:11:07.231 ************************************ 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 406ykrixgu4gyxpqzq9ffboh3tp0hy8977k1n1q40xn1mqszeui2c3sns2dto6316smopdf4i3x39fncm4az6i1xtzco3i5jnjgjuqyl7h31aw6aqhmwfeh66nmgoq1a6lbdchzas2rlr0d23mde6dlo34qrfn38bz8x9fvfmlbw5xdebesn3sfk7k3qupq334zedth8r1smrbvfbyopwtwgy4lbqhi0x9km3byvry9wbksrtrhc9ly64iyw99qrlx693lqatfn9a0x0gjtak9u9jc1pq501a58i2ll0mru756y99c4hk3reuisrn0mdjtnmryzgwt3gc2oo90aod5chkel0eu0ldo6re2fxp954nu3493nd5ra0opwoi7l2aeu2tfjci99dx5aqoa357gckohnlkpzomu94zkoeh2rj1q0kearpkg2o9gt9mpip0p2bpqor6orakqigfexz3scu5wmldc7eqtt7ljc1m8373yl18bmx83sadj6ilp8z == \4\0\6\y\k\r\i\x\g\u\4\g\y\x\p\q\z\q\9\f\f\b\o\h\3\t\p\0\h\y\8\9\7\7\k\1\n\1\q\4\0\x\n\1\m\q\s\z\e\u\i\2\c\3\s\n\s\2\d\t\o\6\3\1\6\s\m\o\p\d\f\4\i\3\x\3\9\f\n\c\m\4\a\z\6\i\1\x\t\z\c\o\3\i\5\j\n\j\g\j\u\q\y\l\7\h\3\1\a\w\6\a\q\h\m\w\f\e\h\6\6\n\m\g\o\q\1\a\6\l\b\d\c\h\z\a\s\2\r\l\r\0\d\2\3\m\d\e\6\d\l\o\3\4\q\r\f\n\3\8\b\z\8\x\9\f\v\f\m\l\b\w\5\x\d\e\b\e\s\n\3\s\f\k\7\k\3\q\u\p\q\3\3\4\z\e\d\t\h\8\r\1\s\m\r\b\v\f\b\y\o\p\w\t\w\g\y\4\l\b\q\h\i\0\x\9\k\m\3\b\y\v\r\y\9\w\b\k\s\r\t\r\h\c\9\l\y\6\4\i\y\w\9\9\q\r\l\x\6\9\3\l\q\a\t\f\n\9\a\0\x\0\g\j\t\a\k\9\u\9\j\c\1\p\q\5\0\1\a\5\8\i\2\l\l\0\m\r\u\7\5\6\y\9\9\c\4\h\k\3\r\e\u\i\s\r\n\0\m\d\j\t\n\m\r\y\z\g\w\t\3\g\c\2\o\o\9\0\a\o\d\5\c\h\k\e\l\0\e\u\0\l\d\o\6\r\e\2\f\x\p\9\5\4\n\u\3\4\9\3\n\d\5\r\a\0\o\p\w\o\i\7\l\2\a\e\u\2\t\f\j\c\i\9\9\d\x\5\a\q\o\a\3\5\7\g\c\k\o\h\n\l\k\p\z\o\m\u\9\4\z\k\o\e\h\2\r\j\1\q\0\k\e\a\r\p\k\g\2\o\9\g\t\9\m\p\i\p\0\p\2\b\p\q\o\r\6\o\r\a\k\q\i\g\f\e\x\z\3\s\c\u\5\w\m\l\d\c\7\e\q\t\t\7\l\j\c\1\m\8\3\7\3\y\l\1\8\b\m\x\8\3\s\a\d\j\6\i\l\p\8\z ]] 00:11:07.231 00:11:07.231 real 0m4.950s 00:11:07.231 user 0m2.718s 00:11:07.231 sys 0m1.193s 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:07.231 ************************************ 00:11:07.231 END TEST spdk_dd_posix 00:11:07.231 ************************************ 00:11:07.231 00:11:07.231 real 0m21.945s 00:11:07.231 user 0m11.031s 00:11:07.231 sys 0m6.847s 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.231 06:06:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:07.231 06:06:12 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:07.231 06:06:12 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:07.231 06:06:12 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.231 06:06:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:07.231 ************************************ 00:11:07.231 START TEST spdk_dd_malloc 00:11:07.231 ************************************ 00:11:07.231 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:07.491 * Looking for test storage... 00:11:07.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.491 --rc genhtml_branch_coverage=1 00:11:07.491 --rc genhtml_function_coverage=1 00:11:07.491 --rc genhtml_legend=1 00:11:07.491 --rc geninfo_all_blocks=1 00:11:07.491 --rc geninfo_unexecuted_blocks=1 00:11:07.491 00:11:07.491 ' 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.491 --rc genhtml_branch_coverage=1 00:11:07.491 --rc genhtml_function_coverage=1 00:11:07.491 --rc genhtml_legend=1 00:11:07.491 --rc geninfo_all_blocks=1 00:11:07.491 --rc geninfo_unexecuted_blocks=1 00:11:07.491 00:11:07.491 ' 00:11:07.491 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.491 --rc genhtml_branch_coverage=1 00:11:07.492 --rc genhtml_function_coverage=1 00:11:07.492 --rc genhtml_legend=1 00:11:07.492 --rc geninfo_all_blocks=1 00:11:07.492 --rc geninfo_unexecuted_blocks=1 00:11:07.492 00:11:07.492 ' 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.492 --rc genhtml_branch_coverage=1 00:11:07.492 --rc genhtml_function_coverage=1 00:11:07.492 --rc genhtml_legend=1 00:11:07.492 --rc geninfo_all_blocks=1 00:11:07.492 --rc geninfo_unexecuted_blocks=1 00:11:07.492 00:11:07.492 ' 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.492 06:06:12 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:07.492 ************************************ 00:11:07.492 START TEST dd_malloc_copy 00:11:07.492 ************************************ 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:07.492 06:06:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:07.750 [2024-11-27 06:06:12.587967] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:07.750 [2024-11-27 06:06:12.588077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60989 ] 00:11:07.750 { 00:11:07.750 "subsystems": [ 00:11:07.750 { 00:11:07.750 "subsystem": "bdev", 00:11:07.750 "config": [ 00:11:07.750 { 00:11:07.750 "params": { 00:11:07.750 "block_size": 512, 00:11:07.750 "num_blocks": 1048576, 00:11:07.750 "name": "malloc0" 00:11:07.750 }, 00:11:07.750 "method": "bdev_malloc_create" 00:11:07.750 }, 00:11:07.750 { 00:11:07.750 "params": { 00:11:07.750 "block_size": 512, 00:11:07.750 "num_blocks": 1048576, 00:11:07.750 "name": "malloc1" 00:11:07.750 }, 00:11:07.750 "method": "bdev_malloc_create" 00:11:07.750 }, 00:11:07.750 { 00:11:07.750 "method": "bdev_wait_for_examine" 00:11:07.750 } 00:11:07.750 ] 00:11:07.750 } 00:11:07.750 ] 00:11:07.750 } 00:11:07.750 [2024-11-27 06:06:12.739930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.750 [2024-11-27 06:06:12.809093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.008 [2024-11-27 06:06:12.867063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.382  [2024-11-27T06:06:15.414Z] Copying: 196/512 [MB] (196 MBps) [2024-11-27T06:06:15.980Z] Copying: 392/512 [MB] (195 MBps) [2024-11-27T06:06:16.545Z] Copying: 512/512 [MB] (average 195 MBps) 00:11:11.448 00:11:11.448 06:06:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:11.448 06:06:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:11.448 06:06:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:11.448 06:06:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:11.448 [2024-11-27 06:06:16.491354] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
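The bdev configuration that dd_malloc_copy feeds to spdk_dd over /dev/fd/62 is the JSON dumped above. Reassembled as a stand-alone invocation it looks roughly like this (the scratch file path is a stand-in for the test's fd redirection):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=/tmp/malloc_copy.json                                    # hypothetical scratch path
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json "$conf"           # copies 512 MiB of malloc0 into malloc1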
00:11:11.448 [2024-11-27 06:06:16.491708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:11:11.448 { 00:11:11.448 "subsystems": [ 00:11:11.448 { 00:11:11.448 "subsystem": "bdev", 00:11:11.448 "config": [ 00:11:11.448 { 00:11:11.448 "params": { 00:11:11.448 "block_size": 512, 00:11:11.448 "num_blocks": 1048576, 00:11:11.448 "name": "malloc0" 00:11:11.448 }, 00:11:11.448 "method": "bdev_malloc_create" 00:11:11.448 }, 00:11:11.448 { 00:11:11.448 "params": { 00:11:11.448 "block_size": 512, 00:11:11.448 "num_blocks": 1048576, 00:11:11.448 "name": "malloc1" 00:11:11.448 }, 00:11:11.448 "method": "bdev_malloc_create" 00:11:11.448 }, 00:11:11.448 { 00:11:11.448 "method": "bdev_wait_for_examine" 00:11:11.448 } 00:11:11.448 ] 00:11:11.448 } 00:11:11.448 ] 00:11:11.448 } 00:11:11.707 [2024-11-27 06:06:16.640922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.707 [2024-11-27 06:06:16.715097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.707 [2024-11-27 06:06:16.774914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.081  [2024-11-27T06:06:19.554Z] Copying: 181/512 [MB] (181 MBps) [2024-11-27T06:06:20.121Z] Copying: 378/512 [MB] (196 MBps) [2024-11-27T06:06:20.687Z] Copying: 512/512 [MB] (average 190 MBps) 00:11:15.590 00:11:15.590 00:11:15.590 real 0m7.883s 00:11:15.590 user 0m6.832s 00:11:15.590 sys 0m0.870s 00:11:15.590 ************************************ 00:11:15.590 END TEST dd_malloc_copy 00:11:15.590 ************************************ 00:11:15.590 06:06:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.590 06:06:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 ************************************ 00:11:15.590 END TEST spdk_dd_malloc 00:11:15.590 ************************************ 00:11:15.590 00:11:15.590 real 0m8.149s 00:11:15.590 user 0m6.992s 00:11:15.590 sys 0m0.976s 00:11:15.590 06:06:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.590 06:06:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 06:06:20 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:15.590 06:06:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.590 06:06:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.590 06:06:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 ************************************ 00:11:15.590 START TEST spdk_dd_bdev_to_bdev 00:11:15.590 ************************************ 00:11:15.590 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:15.590 * Looking for test storage... 
00:11:15.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:15.590 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.591 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:11:15.850 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:11:15.850 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.850 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:11:15.850 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.850 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:11:15.850 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.851 --rc genhtml_branch_coverage=1 00:11:15.851 --rc genhtml_function_coverage=1 00:11:15.851 --rc genhtml_legend=1 00:11:15.851 --rc geninfo_all_blocks=1 00:11:15.851 --rc geninfo_unexecuted_blocks=1 00:11:15.851 00:11:15.851 ' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.851 --rc genhtml_branch_coverage=1 00:11:15.851 --rc genhtml_function_coverage=1 00:11:15.851 --rc genhtml_legend=1 00:11:15.851 --rc geninfo_all_blocks=1 00:11:15.851 --rc geninfo_unexecuted_blocks=1 00:11:15.851 00:11:15.851 ' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.851 --rc genhtml_branch_coverage=1 00:11:15.851 --rc genhtml_function_coverage=1 00:11:15.851 --rc genhtml_legend=1 00:11:15.851 --rc geninfo_all_blocks=1 00:11:15.851 --rc geninfo_unexecuted_blocks=1 00:11:15.851 00:11:15.851 ' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.851 --rc genhtml_branch_coverage=1 00:11:15.851 --rc genhtml_function_coverage=1 00:11:15.851 --rc genhtml_legend=1 00:11:15.851 --rc geninfo_all_blocks=1 00:11:15.851 --rc geninfo_unexecuted_blocks=1 00:11:15.851 00:11:15.851 ' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.851 06:06:20 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 ************************************ 00:11:15.851 START TEST dd_inflate_file 00:11:15.851 ************************************ 00:11:15.851 06:06:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:15.851 [2024-11-27 06:06:20.775523] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:15.851 [2024-11-27 06:06:20.775681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:11:15.851 [2024-11-27 06:06:20.928473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.110 [2024-11-27 06:06:21.003562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.110 [2024-11-27 06:06:21.064590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:16.110  [2024-11-27T06:06:21.487Z] Copying: 64/64 [MB] (average 1488 MBps) 00:11:16.390 00:11:16.390 00:11:16.390 real 0m0.646s 00:11:16.390 user 0m0.385s 00:11:16.390 sys 0m0.330s 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:16.390 ************************************ 00:11:16.390 END TEST dd_inflate_file 00:11:16.390 ************************************ 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:16.390 ************************************ 00:11:16.390 START TEST dd_copy_to_out_bdev 00:11:16.390 ************************************ 00:11:16.390 06:06:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:16.390 { 00:11:16.390 "subsystems": [ 00:11:16.390 { 00:11:16.390 "subsystem": "bdev", 00:11:16.390 "config": [ 00:11:16.390 { 00:11:16.390 "params": { 00:11:16.390 "trtype": "pcie", 00:11:16.390 "traddr": "0000:00:10.0", 00:11:16.390 "name": "Nvme0" 00:11:16.390 }, 00:11:16.390 "method": "bdev_nvme_attach_controller" 00:11:16.390 }, 00:11:16.390 { 00:11:16.390 "params": { 00:11:16.390 "trtype": "pcie", 00:11:16.390 "traddr": "0000:00:11.0", 00:11:16.390 "name": "Nvme1" 00:11:16.390 }, 00:11:16.390 "method": "bdev_nvme_attach_controller" 00:11:16.390 }, 00:11:16.390 { 00:11:16.390 "method": "bdev_wait_for_examine" 00:11:16.390 } 00:11:16.390 ] 00:11:16.390 } 00:11:16.390 ] 00:11:16.390 } 00:11:16.390 [2024-11-27 06:06:21.468427] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
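The copy_to_out_bdev run attaches both NVMe controllers through the JSON config printed just above. As a stand-alone command that setup is roughly the following (again with a scratch file standing in for the test's /dev/fd/62 redirection):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=/tmp/nvme_pair.json                                      # hypothetical scratch path
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json "$conf"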
00:11:16.390 [2024-11-27 06:06:21.468557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:11:16.672 [2024-11-27 06:06:21.617458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.672 [2024-11-27 06:06:21.683079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.672 [2024-11-27 06:06:21.739480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.048  [2024-11-27T06:06:23.145Z] Copying: 55/64 [MB] (55 MBps) [2024-11-27T06:06:23.404Z] Copying: 64/64 [MB] (average 56 MBps) 00:11:18.307 00:11:18.307 00:11:18.307 real 0m1.891s 00:11:18.307 user 0m1.646s 00:11:18.307 sys 0m1.515s 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:18.307 ************************************ 00:11:18.307 END TEST dd_copy_to_out_bdev 00:11:18.307 ************************************ 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:18.307 ************************************ 00:11:18.307 START TEST dd_offset_magic 00:11:18.307 ************************************ 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:18.307 06:06:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:18.566 [2024-11-27 06:06:23.423429] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:18.566 [2024-11-27 06:06:23.423556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61234 ] 00:11:18.566 { 00:11:18.566 "subsystems": [ 00:11:18.566 { 00:11:18.566 "subsystem": "bdev", 00:11:18.566 "config": [ 00:11:18.566 { 00:11:18.566 "params": { 00:11:18.566 "trtype": "pcie", 00:11:18.566 "traddr": "0000:00:10.0", 00:11:18.566 "name": "Nvme0" 00:11:18.566 }, 00:11:18.566 "method": "bdev_nvme_attach_controller" 00:11:18.566 }, 00:11:18.566 { 00:11:18.566 "params": { 00:11:18.566 "trtype": "pcie", 00:11:18.566 "traddr": "0000:00:11.0", 00:11:18.566 "name": "Nvme1" 00:11:18.566 }, 00:11:18.566 "method": "bdev_nvme_attach_controller" 00:11:18.566 }, 00:11:18.566 { 00:11:18.566 "method": "bdev_wait_for_examine" 00:11:18.566 } 00:11:18.566 ] 00:11:18.566 } 00:11:18.566 ] 00:11:18.566 } 00:11:18.566 [2024-11-27 06:06:23.568883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.566 [2024-11-27 06:06:23.633291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.824 [2024-11-27 06:06:23.688721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.082  [2024-11-27T06:06:24.438Z] Copying: 65/65 [MB] (average 855 MBps) 00:11:19.341 00:11:19.341 06:06:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:19.341 06:06:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:19.341 06:06:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:19.341 06:06:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:19.341 [2024-11-27 06:06:24.375181] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:19.341 [2024-11-27 06:06:24.375288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:11:19.341 { 00:11:19.341 "subsystems": [ 00:11:19.341 { 00:11:19.341 "subsystem": "bdev", 00:11:19.341 "config": [ 00:11:19.341 { 00:11:19.341 "params": { 00:11:19.341 "trtype": "pcie", 00:11:19.341 "traddr": "0000:00:10.0", 00:11:19.341 "name": "Nvme0" 00:11:19.341 }, 00:11:19.341 "method": "bdev_nvme_attach_controller" 00:11:19.341 }, 00:11:19.341 { 00:11:19.341 "params": { 00:11:19.341 "trtype": "pcie", 00:11:19.341 "traddr": "0000:00:11.0", 00:11:19.341 "name": "Nvme1" 00:11:19.341 }, 00:11:19.341 "method": "bdev_nvme_attach_controller" 00:11:19.341 }, 00:11:19.341 { 00:11:19.341 "method": "bdev_wait_for_examine" 00:11:19.341 } 00:11:19.341 ] 00:11:19.341 } 00:11:19.341 ] 00:11:19.341 } 00:11:19.599 [2024-11-27 06:06:24.521742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.599 [2024-11-27 06:06:24.586312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.599 [2024-11-27 06:06:24.642745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.859  [2024-11-27T06:06:25.214Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:20.117 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:20.117 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:20.117 { 00:11:20.117 "subsystems": [ 00:11:20.117 { 00:11:20.117 "subsystem": "bdev", 00:11:20.117 "config": [ 00:11:20.117 { 00:11:20.117 "params": { 00:11:20.117 "trtype": "pcie", 00:11:20.117 "traddr": "0000:00:10.0", 00:11:20.117 "name": "Nvme0" 00:11:20.117 }, 00:11:20.117 "method": "bdev_nvme_attach_controller" 00:11:20.117 }, 00:11:20.117 { 00:11:20.117 "params": { 00:11:20.117 "trtype": "pcie", 00:11:20.117 "traddr": "0000:00:11.0", 00:11:20.117 "name": "Nvme1" 00:11:20.117 }, 00:11:20.117 "method": "bdev_nvme_attach_controller" 00:11:20.117 }, 00:11:20.117 { 00:11:20.117 "method": "bdev_wait_for_examine" 00:11:20.117 } 00:11:20.117 ] 00:11:20.117 } 00:11:20.117 ] 00:11:20.117 } 00:11:20.117 [2024-11-27 06:06:25.091109] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:20.117 [2024-11-27 06:06:25.091243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61271 ] 00:11:20.376 [2024-11-27 06:06:25.238418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.376 [2024-11-27 06:06:25.303543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.376 [2024-11-27 06:06:25.362301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.634  [2024-11-27T06:06:25.989Z] Copying: 65/65 [MB] (average 928 MBps) 00:11:20.892 00:11:20.892 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:20.892 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:20.892 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:20.892 06:06:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:20.893 [2024-11-27 06:06:25.936592] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:20.893 [2024-11-27 06:06:25.936702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61291 ] 00:11:20.893 { 00:11:20.893 "subsystems": [ 00:11:20.893 { 00:11:20.893 "subsystem": "bdev", 00:11:20.893 "config": [ 00:11:20.893 { 00:11:20.893 "params": { 00:11:20.893 "trtype": "pcie", 00:11:20.893 "traddr": "0000:00:10.0", 00:11:20.893 "name": "Nvme0" 00:11:20.893 }, 00:11:20.893 "method": "bdev_nvme_attach_controller" 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "params": { 00:11:20.893 "trtype": "pcie", 00:11:20.893 "traddr": "0000:00:11.0", 00:11:20.893 "name": "Nvme1" 00:11:20.893 }, 00:11:20.893 "method": "bdev_nvme_attach_controller" 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "method": "bdev_wait_for_examine" 00:11:20.893 } 00:11:20.893 ] 00:11:20.893 } 00:11:20.893 ] 00:11:20.893 } 00:11:21.151 [2024-11-27 06:06:26.088019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.151 [2024-11-27 06:06:26.153838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.151 [2024-11-27 06:06:26.209435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:21.413  [2024-11-27T06:06:26.775Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:21.678 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:21.678 00:11:21.678 real 0m3.235s 00:11:21.678 user 0m2.366s 00:11:21.678 sys 0m0.976s 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.678 ************************************ 00:11:21.678 END TEST dd_offset_magic 00:11:21.678 ************************************ 
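The dd_offset_magic run that just finished boils down to the spdk_dd pattern below. This is a rough sketch reconstructed only from the commands visible in this log, not the verbatim bdev_to_bdev.sh script: spdk_dd stands for build/bin/spdk_dd, bdev_conf.json stands in for the JSON that gen_conf feeds over /dev/fd/62, and the redirect into read is assumed.
  # Copy a 65 MiB window from Nvme0n1 into Nvme1n1 at a 64-block (1 MiB block size) offset.
  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json bdev_conf.json
  # Read one 1 MiB block back from the same offset into a dump file.
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json bdev_conf.json
  # The magic string sits at the start of the dump; compare its first 26 bytes.
  read -rn26 magic_check < dd.dump1
  [[ $magic_check == "This Is Our Magic, find it" ]]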
00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:21.678 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:21.679 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:21.679 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:21.679 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:21.679 06:06:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:21.679 [2024-11-27 06:06:26.692228] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:21.679 [2024-11-27 06:06:26.692335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61329 ] 00:11:21.679 { 00:11:21.679 "subsystems": [ 00:11:21.679 { 00:11:21.679 "subsystem": "bdev", 00:11:21.679 "config": [ 00:11:21.679 { 00:11:21.679 "params": { 00:11:21.679 "trtype": "pcie", 00:11:21.679 "traddr": "0000:00:10.0", 00:11:21.679 "name": "Nvme0" 00:11:21.679 }, 00:11:21.679 "method": "bdev_nvme_attach_controller" 00:11:21.679 }, 00:11:21.679 { 00:11:21.679 "params": { 00:11:21.679 "trtype": "pcie", 00:11:21.679 "traddr": "0000:00:11.0", 00:11:21.679 "name": "Nvme1" 00:11:21.679 }, 00:11:21.679 "method": "bdev_nvme_attach_controller" 00:11:21.679 }, 00:11:21.679 { 00:11:21.679 "method": "bdev_wait_for_examine" 00:11:21.679 } 00:11:21.679 ] 00:11:21.679 } 00:11:21.679 ] 00:11:21.679 } 00:11:21.937 [2024-11-27 06:06:26.845828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.937 [2024-11-27 06:06:26.914984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.937 [2024-11-27 06:06:26.972628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.195  [2024-11-27T06:06:27.550Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:22.453 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json 
/dev/fd/62 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:22.453 06:06:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:22.453 { 00:11:22.453 "subsystems": [ 00:11:22.453 { 00:11:22.453 "subsystem": "bdev", 00:11:22.453 "config": [ 00:11:22.453 { 00:11:22.453 "params": { 00:11:22.453 "trtype": "pcie", 00:11:22.453 "traddr": "0000:00:10.0", 00:11:22.453 "name": "Nvme0" 00:11:22.453 }, 00:11:22.453 "method": "bdev_nvme_attach_controller" 00:11:22.453 }, 00:11:22.453 { 00:11:22.453 "params": { 00:11:22.453 "trtype": "pcie", 00:11:22.453 "traddr": "0000:00:11.0", 00:11:22.453 "name": "Nvme1" 00:11:22.453 }, 00:11:22.453 "method": "bdev_nvme_attach_controller" 00:11:22.453 }, 00:11:22.453 { 00:11:22.453 "method": "bdev_wait_for_examine" 00:11:22.453 } 00:11:22.453 ] 00:11:22.453 } 00:11:22.453 ] 00:11:22.453 } 00:11:22.453 [2024-11-27 06:06:27.419154] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:22.453 [2024-11-27 06:06:27.419259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61344 ] 00:11:22.711 [2024-11-27 06:06:27.562443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.711 [2024-11-27 06:06:27.625482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.711 [2024-11-27 06:06:27.682848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.969  [2024-11-27T06:06:28.066Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:11:22.969 00:11:23.228 06:06:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:23.228 00:11:23.228 real 0m7.582s 00:11:23.228 user 0m5.577s 00:11:23.228 sys 0m3.556s 00:11:23.228 06:06:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.228 06:06:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:23.228 ************************************ 00:11:23.228 END TEST spdk_dd_bdev_to_bdev 00:11:23.228 ************************************ 00:11:23.228 06:06:28 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:23.228 06:06:28 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:23.228 06:06:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.228 06:06:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.228 06:06:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:23.228 ************************************ 00:11:23.228 START TEST spdk_dd_uring 00:11:23.228 ************************************ 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:23.228 * Looking for test storage... 
00:11:23.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.228 --rc genhtml_branch_coverage=1 00:11:23.228 --rc genhtml_function_coverage=1 00:11:23.228 --rc genhtml_legend=1 00:11:23.228 --rc geninfo_all_blocks=1 00:11:23.228 --rc geninfo_unexecuted_blocks=1 00:11:23.228 00:11:23.228 ' 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.228 --rc genhtml_branch_coverage=1 00:11:23.228 --rc genhtml_function_coverage=1 00:11:23.228 --rc genhtml_legend=1 00:11:23.228 --rc geninfo_all_blocks=1 00:11:23.228 --rc geninfo_unexecuted_blocks=1 00:11:23.228 00:11:23.228 ' 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.228 --rc genhtml_branch_coverage=1 00:11:23.228 --rc genhtml_function_coverage=1 00:11:23.228 --rc genhtml_legend=1 00:11:23.228 --rc geninfo_all_blocks=1 00:11:23.228 --rc geninfo_unexecuted_blocks=1 00:11:23.228 00:11:23.228 ' 00:11:23.228 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.228 --rc genhtml_branch_coverage=1 00:11:23.228 --rc genhtml_function_coverage=1 00:11:23.228 --rc genhtml_legend=1 00:11:23.229 --rc geninfo_all_blocks=1 00:11:23.229 --rc geninfo_unexecuted_blocks=1 00:11:23.229 00:11:23.229 ' 00:11:23.229 06:06:28 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.229 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:23.487 ************************************ 00:11:23.487 START TEST dd_uring_copy 00:11:23.487 ************************************ 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:23.487 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:23.488 
06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=gbcgctlcqnsvnfobaqy9qpjbc4hi9abjsp6eqkcjxpfjba84y9xb6vkbotja6hjwon1hxwoupsslbyvzg4bkmxl0ex2drvyd3qwlbdm5vjy9g0uxsm8yb142i7qs3kyht2gx6mz9naqgher4bmb4tq0v8msxd0g4eazvqwutx53xcpcdy7y2cijdkk9434a1szoei96g1n6c2u06txa9qwmk2f154khlyyrjoq8ubqzxat1kccktsa191il9p1z5n9yr9uv5xlylzkoibpboeovqaz45dxd5x4dg3ai2l2wslguy42y3u62xj2q54hrl3nwyb8oouebgw6ed6j7p5vpv7ha9szjcqnsqqcx9j92wg877t6l9r03efw8c4ytihk4fe9mn78719p7c9qafl2hb9pfyb3dfu7r9ooxknyk19q15dcucehkd9h2mocsmompwn0axleevyqh2d1mf4y2f1nlva1tsya8r5bi6tm210fmnk32mfzlgd408rcynnkxpnnr4zcanlcaijulsz5766pzy0hy3b8azpcnxtoyacrxbung42q4b0dpayi3uqf48mir3s0r4xtz5vdtnzw1kxv4ial9xpnde44qpdcyjki3fxvcgg0m37prcsscu4pakmnoff1jes85ept0cq1hdmftu18d8tcfdbv0f9lgdd43gud5rksuqnchwct58236502oynqze454bid2y9o2qjb1ceqynavmxt0fk1vo72ei4tcp83ashn86rrzwcfma3edpc1ixba5jv81ryivgkjnt2exo7yaosy25mdeqomncmrg0g3nfxzgyd2kweal8qfb7h9u8ebq718c3zzjqyodaga29ls1nib4n633zsmfvcqvw8v195dqm227qk2y3qx5ehnr8h84o4nlrudnnop6qlv7qzoj2uhmxto9u0ixht06r917ipljqliuo754fft41yu3pzvshsl96yprvjl8pxihinikzhmf2wf54de922z0s3x4ruzid0urqq 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
gbcgctlcqnsvnfobaqy9qpjbc4hi9abjsp6eqkcjxpfjba84y9xb6vkbotja6hjwon1hxwoupsslbyvzg4bkmxl0ex2drvyd3qwlbdm5vjy9g0uxsm8yb142i7qs3kyht2gx6mz9naqgher4bmb4tq0v8msxd0g4eazvqwutx53xcpcdy7y2cijdkk9434a1szoei96g1n6c2u06txa9qwmk2f154khlyyrjoq8ubqzxat1kccktsa191il9p1z5n9yr9uv5xlylzkoibpboeovqaz45dxd5x4dg3ai2l2wslguy42y3u62xj2q54hrl3nwyb8oouebgw6ed6j7p5vpv7ha9szjcqnsqqcx9j92wg877t6l9r03efw8c4ytihk4fe9mn78719p7c9qafl2hb9pfyb3dfu7r9ooxknyk19q15dcucehkd9h2mocsmompwn0axleevyqh2d1mf4y2f1nlva1tsya8r5bi6tm210fmnk32mfzlgd408rcynnkxpnnr4zcanlcaijulsz5766pzy0hy3b8azpcnxtoyacrxbung42q4b0dpayi3uqf48mir3s0r4xtz5vdtnzw1kxv4ial9xpnde44qpdcyjki3fxvcgg0m37prcsscu4pakmnoff1jes85ept0cq1hdmftu18d8tcfdbv0f9lgdd43gud5rksuqnchwct58236502oynqze454bid2y9o2qjb1ceqynavmxt0fk1vo72ei4tcp83ashn86rrzwcfma3edpc1ixba5jv81ryivgkjnt2exo7yaosy25mdeqomncmrg0g3nfxzgyd2kweal8qfb7h9u8ebq718c3zzjqyodaga29ls1nib4n633zsmfvcqvw8v195dqm227qk2y3qx5ehnr8h84o4nlrudnnop6qlv7qzoj2uhmxto9u0ixht06r917ipljqliuo754fft41yu3pzvshsl96yprvjl8pxihinikzhmf2wf54de922z0s3x4ruzid0urqq 00:11:23.488 06:06:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:23.488 [2024-11-27 06:06:28.419733] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:23.488 [2024-11-27 06:06:28.419843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61421 ] 00:11:23.488 [2024-11-27 06:06:28.572794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.747 [2024-11-27 06:06:28.639971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.747 [2024-11-27 06:06:28.696393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.321  [2024-11-27T06:06:29.992Z] Copying: 511/511 [MB] (average 1326 MBps) 00:11:24.895 00:11:24.895 06:06:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:24.895 06:06:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:24.895 06:06:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:24.895 06:06:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:24.895 [2024-11-27 06:06:29.742052] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:24.895 [2024-11-27 06:06:29.742188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61444 ] 00:11:24.895 { 00:11:24.895 "subsystems": [ 00:11:24.895 { 00:11:24.895 "subsystem": "bdev", 00:11:24.895 "config": [ 00:11:24.895 { 00:11:24.895 "params": { 00:11:24.895 "block_size": 512, 00:11:24.895 "num_blocks": 1048576, 00:11:24.895 "name": "malloc0" 00:11:24.895 }, 00:11:24.895 "method": "bdev_malloc_create" 00:11:24.895 }, 00:11:24.895 { 00:11:24.895 "params": { 00:11:24.895 "filename": "/dev/zram1", 00:11:24.895 "name": "uring0" 00:11:24.895 }, 00:11:24.895 "method": "bdev_uring_create" 00:11:24.895 }, 00:11:24.895 { 00:11:24.895 "method": "bdev_wait_for_examine" 00:11:24.895 } 00:11:24.895 ] 00:11:24.895 } 00:11:24.895 ] 00:11:24.895 } 00:11:24.895 [2024-11-27 06:06:29.891177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.895 [2024-11-27 06:06:29.961926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.155 [2024-11-27 06:06:30.021658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.530  [2024-11-27T06:06:32.561Z] Copying: 218/512 [MB] (218 MBps) [2024-11-27T06:06:32.820Z] Copying: 433/512 [MB] (215 MBps) [2024-11-27T06:06:33.079Z] Copying: 512/512 [MB] (average 215 MBps) 00:11:27.982 00:11:27.982 06:06:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:27.982 06:06:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:27.982 06:06:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:27.982 06:06:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:27.982 [2024-11-27 06:06:33.060853] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
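The dd_uring_copy steps around this point follow the shape sketched below. The spdk_dd invocations and the diff are taken from the log; the sysfs disksize path and the conf.json name are assumptions rather than the exact dd/common.sh helpers, and conf.json is the config dumped above (malloc0 with 1048576 512-byte blocks plus uring0 backed by /dev/zram1).
  # Hot-add a zram device (this run got id 1) and size it to 512M.
  dev_id=$(cat /sys/class/zram-control/hot_add)
  echo 512M > "/sys/block/zram${dev_id}/disksize"       # assumed sysfs attribute
  # Push the magic dump into zram through the uring bdev, pull it back, compare.
  spdk_dd --if=magic.dump0 --ob=uring0 --json conf.json
  spdk_dd --ib=uring0 --of=magic.dump1 --json conf.json
  diff -q magic.dump0 magic.dump1                       # dumps must match byte-for-byte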
00:11:27.982 [2024-11-27 06:06:33.060953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61488 ] 00:11:27.982 { 00:11:27.982 "subsystems": [ 00:11:27.982 { 00:11:27.982 "subsystem": "bdev", 00:11:27.982 "config": [ 00:11:27.982 { 00:11:27.982 "params": { 00:11:27.982 "block_size": 512, 00:11:27.982 "num_blocks": 1048576, 00:11:27.982 "name": "malloc0" 00:11:27.982 }, 00:11:27.982 "method": "bdev_malloc_create" 00:11:27.982 }, 00:11:27.982 { 00:11:27.982 "params": { 00:11:27.982 "filename": "/dev/zram1", 00:11:27.982 "name": "uring0" 00:11:27.982 }, 00:11:27.982 "method": "bdev_uring_create" 00:11:27.982 }, 00:11:27.982 { 00:11:27.982 "method": "bdev_wait_for_examine" 00:11:27.982 } 00:11:27.982 ] 00:11:27.982 } 00:11:27.982 ] 00:11:27.982 } 00:11:28.241 [2024-11-27 06:06:33.208230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.241 [2024-11-27 06:06:33.263042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.241 [2024-11-27 06:06:33.317360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:29.613  [2024-11-27T06:06:35.641Z] Copying: 173/512 [MB] (173 MBps) [2024-11-27T06:06:36.572Z] Copying: 331/512 [MB] (158 MBps) [2024-11-27T06:06:36.829Z] Copying: 487/512 [MB] (155 MBps) [2024-11-27T06:06:37.088Z] Copying: 512/512 [MB] (average 162 MBps) 00:11:31.991 00:11:31.991 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:31.991 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ gbcgctlcqnsvnfobaqy9qpjbc4hi9abjsp6eqkcjxpfjba84y9xb6vkbotja6hjwon1hxwoupsslbyvzg4bkmxl0ex2drvyd3qwlbdm5vjy9g0uxsm8yb142i7qs3kyht2gx6mz9naqgher4bmb4tq0v8msxd0g4eazvqwutx53xcpcdy7y2cijdkk9434a1szoei96g1n6c2u06txa9qwmk2f154khlyyrjoq8ubqzxat1kccktsa191il9p1z5n9yr9uv5xlylzkoibpboeovqaz45dxd5x4dg3ai2l2wslguy42y3u62xj2q54hrl3nwyb8oouebgw6ed6j7p5vpv7ha9szjcqnsqqcx9j92wg877t6l9r03efw8c4ytihk4fe9mn78719p7c9qafl2hb9pfyb3dfu7r9ooxknyk19q15dcucehkd9h2mocsmompwn0axleevyqh2d1mf4y2f1nlva1tsya8r5bi6tm210fmnk32mfzlgd408rcynnkxpnnr4zcanlcaijulsz5766pzy0hy3b8azpcnxtoyacrxbung42q4b0dpayi3uqf48mir3s0r4xtz5vdtnzw1kxv4ial9xpnde44qpdcyjki3fxvcgg0m37prcsscu4pakmnoff1jes85ept0cq1hdmftu18d8tcfdbv0f9lgdd43gud5rksuqnchwct58236502oynqze454bid2y9o2qjb1ceqynavmxt0fk1vo72ei4tcp83ashn86rrzwcfma3edpc1ixba5jv81ryivgkjnt2exo7yaosy25mdeqomncmrg0g3nfxzgyd2kweal8qfb7h9u8ebq718c3zzjqyodaga29ls1nib4n633zsmfvcqvw8v195dqm227qk2y3qx5ehnr8h84o4nlrudnnop6qlv7qzoj2uhmxto9u0ixht06r917ipljqliuo754fft41yu3pzvshsl96yprvjl8pxihinikzhmf2wf54de922z0s3x4ruzid0urqq == 
\g\b\c\g\c\t\l\c\q\n\s\v\n\f\o\b\a\q\y\9\q\p\j\b\c\4\h\i\9\a\b\j\s\p\6\e\q\k\c\j\x\p\f\j\b\a\8\4\y\9\x\b\6\v\k\b\o\t\j\a\6\h\j\w\o\n\1\h\x\w\o\u\p\s\s\l\b\y\v\z\g\4\b\k\m\x\l\0\e\x\2\d\r\v\y\d\3\q\w\l\b\d\m\5\v\j\y\9\g\0\u\x\s\m\8\y\b\1\4\2\i\7\q\s\3\k\y\h\t\2\g\x\6\m\z\9\n\a\q\g\h\e\r\4\b\m\b\4\t\q\0\v\8\m\s\x\d\0\g\4\e\a\z\v\q\w\u\t\x\5\3\x\c\p\c\d\y\7\y\2\c\i\j\d\k\k\9\4\3\4\a\1\s\z\o\e\i\9\6\g\1\n\6\c\2\u\0\6\t\x\a\9\q\w\m\k\2\f\1\5\4\k\h\l\y\y\r\j\o\q\8\u\b\q\z\x\a\t\1\k\c\c\k\t\s\a\1\9\1\i\l\9\p\1\z\5\n\9\y\r\9\u\v\5\x\l\y\l\z\k\o\i\b\p\b\o\e\o\v\q\a\z\4\5\d\x\d\5\x\4\d\g\3\a\i\2\l\2\w\s\l\g\u\y\4\2\y\3\u\6\2\x\j\2\q\5\4\h\r\l\3\n\w\y\b\8\o\o\u\e\b\g\w\6\e\d\6\j\7\p\5\v\p\v\7\h\a\9\s\z\j\c\q\n\s\q\q\c\x\9\j\9\2\w\g\8\7\7\t\6\l\9\r\0\3\e\f\w\8\c\4\y\t\i\h\k\4\f\e\9\m\n\7\8\7\1\9\p\7\c\9\q\a\f\l\2\h\b\9\p\f\y\b\3\d\f\u\7\r\9\o\o\x\k\n\y\k\1\9\q\1\5\d\c\u\c\e\h\k\d\9\h\2\m\o\c\s\m\o\m\p\w\n\0\a\x\l\e\e\v\y\q\h\2\d\1\m\f\4\y\2\f\1\n\l\v\a\1\t\s\y\a\8\r\5\b\i\6\t\m\2\1\0\f\m\n\k\3\2\m\f\z\l\g\d\4\0\8\r\c\y\n\n\k\x\p\n\n\r\4\z\c\a\n\l\c\a\i\j\u\l\s\z\5\7\6\6\p\z\y\0\h\y\3\b\8\a\z\p\c\n\x\t\o\y\a\c\r\x\b\u\n\g\4\2\q\4\b\0\d\p\a\y\i\3\u\q\f\4\8\m\i\r\3\s\0\r\4\x\t\z\5\v\d\t\n\z\w\1\k\x\v\4\i\a\l\9\x\p\n\d\e\4\4\q\p\d\c\y\j\k\i\3\f\x\v\c\g\g\0\m\3\7\p\r\c\s\s\c\u\4\p\a\k\m\n\o\f\f\1\j\e\s\8\5\e\p\t\0\c\q\1\h\d\m\f\t\u\1\8\d\8\t\c\f\d\b\v\0\f\9\l\g\d\d\4\3\g\u\d\5\r\k\s\u\q\n\c\h\w\c\t\5\8\2\3\6\5\0\2\o\y\n\q\z\e\4\5\4\b\i\d\2\y\9\o\2\q\j\b\1\c\e\q\y\n\a\v\m\x\t\0\f\k\1\v\o\7\2\e\i\4\t\c\p\8\3\a\s\h\n\8\6\r\r\z\w\c\f\m\a\3\e\d\p\c\1\i\x\b\a\5\j\v\8\1\r\y\i\v\g\k\j\n\t\2\e\x\o\7\y\a\o\s\y\2\5\m\d\e\q\o\m\n\c\m\r\g\0\g\3\n\f\x\z\g\y\d\2\k\w\e\a\l\8\q\f\b\7\h\9\u\8\e\b\q\7\1\8\c\3\z\z\j\q\y\o\d\a\g\a\2\9\l\s\1\n\i\b\4\n\6\3\3\z\s\m\f\v\c\q\v\w\8\v\1\9\5\d\q\m\2\2\7\q\k\2\y\3\q\x\5\e\h\n\r\8\h\8\4\o\4\n\l\r\u\d\n\n\o\p\6\q\l\v\7\q\z\o\j\2\u\h\m\x\t\o\9\u\0\i\x\h\t\0\6\r\9\1\7\i\p\l\j\q\l\i\u\o\7\5\4\f\f\t\4\1\y\u\3\p\z\v\s\h\s\l\9\6\y\p\r\v\j\l\8\p\x\i\h\i\n\i\k\z\h\m\f\2\w\f\5\4\d\e\9\2\2\z\0\s\3\x\4\r\u\z\i\d\0\u\r\q\q ]] 00:11:31.991 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:31.991 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ gbcgctlcqnsvnfobaqy9qpjbc4hi9abjsp6eqkcjxpfjba84y9xb6vkbotja6hjwon1hxwoupsslbyvzg4bkmxl0ex2drvyd3qwlbdm5vjy9g0uxsm8yb142i7qs3kyht2gx6mz9naqgher4bmb4tq0v8msxd0g4eazvqwutx53xcpcdy7y2cijdkk9434a1szoei96g1n6c2u06txa9qwmk2f154khlyyrjoq8ubqzxat1kccktsa191il9p1z5n9yr9uv5xlylzkoibpboeovqaz45dxd5x4dg3ai2l2wslguy42y3u62xj2q54hrl3nwyb8oouebgw6ed6j7p5vpv7ha9szjcqnsqqcx9j92wg877t6l9r03efw8c4ytihk4fe9mn78719p7c9qafl2hb9pfyb3dfu7r9ooxknyk19q15dcucehkd9h2mocsmompwn0axleevyqh2d1mf4y2f1nlva1tsya8r5bi6tm210fmnk32mfzlgd408rcynnkxpnnr4zcanlcaijulsz5766pzy0hy3b8azpcnxtoyacrxbung42q4b0dpayi3uqf48mir3s0r4xtz5vdtnzw1kxv4ial9xpnde44qpdcyjki3fxvcgg0m37prcsscu4pakmnoff1jes85ept0cq1hdmftu18d8tcfdbv0f9lgdd43gud5rksuqnchwct58236502oynqze454bid2y9o2qjb1ceqynavmxt0fk1vo72ei4tcp83ashn86rrzwcfma3edpc1ixba5jv81ryivgkjnt2exo7yaosy25mdeqomncmrg0g3nfxzgyd2kweal8qfb7h9u8ebq718c3zzjqyodaga29ls1nib4n633zsmfvcqvw8v195dqm227qk2y3qx5ehnr8h84o4nlrudnnop6qlv7qzoj2uhmxto9u0ixht06r917ipljqliuo754fft41yu3pzvshsl96yprvjl8pxihinikzhmf2wf54de922z0s3x4ruzid0urqq == 
\g\b\c\g\c\t\l\c\q\n\s\v\n\f\o\b\a\q\y\9\q\p\j\b\c\4\h\i\9\a\b\j\s\p\6\e\q\k\c\j\x\p\f\j\b\a\8\4\y\9\x\b\6\v\k\b\o\t\j\a\6\h\j\w\o\n\1\h\x\w\o\u\p\s\s\l\b\y\v\z\g\4\b\k\m\x\l\0\e\x\2\d\r\v\y\d\3\q\w\l\b\d\m\5\v\j\y\9\g\0\u\x\s\m\8\y\b\1\4\2\i\7\q\s\3\k\y\h\t\2\g\x\6\m\z\9\n\a\q\g\h\e\r\4\b\m\b\4\t\q\0\v\8\m\s\x\d\0\g\4\e\a\z\v\q\w\u\t\x\5\3\x\c\p\c\d\y\7\y\2\c\i\j\d\k\k\9\4\3\4\a\1\s\z\o\e\i\9\6\g\1\n\6\c\2\u\0\6\t\x\a\9\q\w\m\k\2\f\1\5\4\k\h\l\y\y\r\j\o\q\8\u\b\q\z\x\a\t\1\k\c\c\k\t\s\a\1\9\1\i\l\9\p\1\z\5\n\9\y\r\9\u\v\5\x\l\y\l\z\k\o\i\b\p\b\o\e\o\v\q\a\z\4\5\d\x\d\5\x\4\d\g\3\a\i\2\l\2\w\s\l\g\u\y\4\2\y\3\u\6\2\x\j\2\q\5\4\h\r\l\3\n\w\y\b\8\o\o\u\e\b\g\w\6\e\d\6\j\7\p\5\v\p\v\7\h\a\9\s\z\j\c\q\n\s\q\q\c\x\9\j\9\2\w\g\8\7\7\t\6\l\9\r\0\3\e\f\w\8\c\4\y\t\i\h\k\4\f\e\9\m\n\7\8\7\1\9\p\7\c\9\q\a\f\l\2\h\b\9\p\f\y\b\3\d\f\u\7\r\9\o\o\x\k\n\y\k\1\9\q\1\5\d\c\u\c\e\h\k\d\9\h\2\m\o\c\s\m\o\m\p\w\n\0\a\x\l\e\e\v\y\q\h\2\d\1\m\f\4\y\2\f\1\n\l\v\a\1\t\s\y\a\8\r\5\b\i\6\t\m\2\1\0\f\m\n\k\3\2\m\f\z\l\g\d\4\0\8\r\c\y\n\n\k\x\p\n\n\r\4\z\c\a\n\l\c\a\i\j\u\l\s\z\5\7\6\6\p\z\y\0\h\y\3\b\8\a\z\p\c\n\x\t\o\y\a\c\r\x\b\u\n\g\4\2\q\4\b\0\d\p\a\y\i\3\u\q\f\4\8\m\i\r\3\s\0\r\4\x\t\z\5\v\d\t\n\z\w\1\k\x\v\4\i\a\l\9\x\p\n\d\e\4\4\q\p\d\c\y\j\k\i\3\f\x\v\c\g\g\0\m\3\7\p\r\c\s\s\c\u\4\p\a\k\m\n\o\f\f\1\j\e\s\8\5\e\p\t\0\c\q\1\h\d\m\f\t\u\1\8\d\8\t\c\f\d\b\v\0\f\9\l\g\d\d\4\3\g\u\d\5\r\k\s\u\q\n\c\h\w\c\t\5\8\2\3\6\5\0\2\o\y\n\q\z\e\4\5\4\b\i\d\2\y\9\o\2\q\j\b\1\c\e\q\y\n\a\v\m\x\t\0\f\k\1\v\o\7\2\e\i\4\t\c\p\8\3\a\s\h\n\8\6\r\r\z\w\c\f\m\a\3\e\d\p\c\1\i\x\b\a\5\j\v\8\1\r\y\i\v\g\k\j\n\t\2\e\x\o\7\y\a\o\s\y\2\5\m\d\e\q\o\m\n\c\m\r\g\0\g\3\n\f\x\z\g\y\d\2\k\w\e\a\l\8\q\f\b\7\h\9\u\8\e\b\q\7\1\8\c\3\z\z\j\q\y\o\d\a\g\a\2\9\l\s\1\n\i\b\4\n\6\3\3\z\s\m\f\v\c\q\v\w\8\v\1\9\5\d\q\m\2\2\7\q\k\2\y\3\q\x\5\e\h\n\r\8\h\8\4\o\4\n\l\r\u\d\n\n\o\p\6\q\l\v\7\q\z\o\j\2\u\h\m\x\t\o\9\u\0\i\x\h\t\0\6\r\9\1\7\i\p\l\j\q\l\i\u\o\7\5\4\f\f\t\4\1\y\u\3\p\z\v\s\h\s\l\9\6\y\p\r\v\j\l\8\p\x\i\h\i\n\i\k\z\h\m\f\2\w\f\5\4\d\e\9\2\2\z\0\s\3\x\4\r\u\z\i\d\0\u\r\q\q ]] 00:11:31.991 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:32.557 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:32.557 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:32.557 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:32.557 06:06:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:32.557 [2024-11-27 06:06:37.576819] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:32.557 [2024-11-27 06:06:37.576926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61561 ] 00:11:32.557 { 00:11:32.557 "subsystems": [ 00:11:32.557 { 00:11:32.557 "subsystem": "bdev", 00:11:32.557 "config": [ 00:11:32.557 { 00:11:32.557 "params": { 00:11:32.557 "block_size": 512, 00:11:32.557 "num_blocks": 1048576, 00:11:32.557 "name": "malloc0" 00:11:32.557 }, 00:11:32.557 "method": "bdev_malloc_create" 00:11:32.557 }, 00:11:32.557 { 00:11:32.557 "params": { 00:11:32.557 "filename": "/dev/zram1", 00:11:32.557 "name": "uring0" 00:11:32.557 }, 00:11:32.557 "method": "bdev_uring_create" 00:11:32.557 }, 00:11:32.557 { 00:11:32.557 "method": "bdev_wait_for_examine" 00:11:32.557 } 00:11:32.557 ] 00:11:32.557 } 00:11:32.557 ] 00:11:32.557 } 00:11:32.816 [2024-11-27 06:06:37.726224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.816 [2024-11-27 06:06:37.795224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.816 [2024-11-27 06:06:37.854621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.192  [2024-11-27T06:06:40.222Z] Copying: 142/512 [MB] (142 MBps) [2024-11-27T06:06:41.157Z] Copying: 294/512 [MB] (151 MBps) [2024-11-27T06:06:41.724Z] Copying: 431/512 [MB] (137 MBps) [2024-11-27T06:06:42.291Z] Copying: 512/512 [MB] (average 144 MBps) 00:11:37.195 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:37.195 06:06:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:37.195 { 00:11:37.195 "subsystems": [ 00:11:37.195 { 00:11:37.195 "subsystem": "bdev", 00:11:37.195 "config": [ 00:11:37.195 { 00:11:37.195 "params": { 00:11:37.195 "block_size": 512, 00:11:37.195 "num_blocks": 1048576, 00:11:37.195 "name": "malloc0" 00:11:37.195 }, 00:11:37.195 "method": "bdev_malloc_create" 00:11:37.195 }, 00:11:37.195 { 00:11:37.195 "params": { 00:11:37.195 "filename": "/dev/zram1", 00:11:37.195 "name": "uring0" 00:11:37.195 }, 00:11:37.195 "method": "bdev_uring_create" 00:11:37.195 }, 00:11:37.195 { 00:11:37.195 "params": { 00:11:37.195 "name": "uring0" 00:11:37.195 }, 00:11:37.195 "method": "bdev_uring_delete" 00:11:37.195 }, 00:11:37.195 { 00:11:37.195 "method": "bdev_wait_for_examine" 00:11:37.195 } 00:11:37.195 ] 00:11:37.195 } 00:11:37.195 ] 00:11:37.195 } 00:11:37.195 [2024-11-27 06:06:42.055925] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:37.195 [2024-11-27 06:06:42.056061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61628 ] 00:11:37.195 [2024-11-27 06:06:42.205577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.195 [2024-11-27 06:06:42.270580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.454 [2024-11-27 06:06:42.326507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.454  [2024-11-27T06:06:43.118Z] Copying: 0/0 [B] (average 0 Bps) 00:11:38.021 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:38.021 06:06:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:38.021 [2024-11-27 06:06:42.982876] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:38.021 [2024-11-27 06:06:42.983018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61657 ] 00:11:38.021 { 00:11:38.021 "subsystems": [ 00:11:38.021 { 00:11:38.021 "subsystem": "bdev", 00:11:38.021 "config": [ 00:11:38.021 { 00:11:38.021 "params": { 00:11:38.021 "block_size": 512, 00:11:38.021 "num_blocks": 1048576, 00:11:38.021 "name": "malloc0" 00:11:38.021 }, 00:11:38.021 "method": "bdev_malloc_create" 00:11:38.021 }, 00:11:38.021 { 00:11:38.021 "params": { 00:11:38.021 "filename": "/dev/zram1", 00:11:38.021 "name": "uring0" 00:11:38.021 }, 00:11:38.021 "method": "bdev_uring_create" 00:11:38.021 }, 00:11:38.021 { 00:11:38.021 "params": { 00:11:38.021 "name": "uring0" 00:11:38.021 }, 00:11:38.021 "method": "bdev_uring_delete" 00:11:38.021 }, 00:11:38.021 { 00:11:38.021 "method": "bdev_wait_for_examine" 00:11:38.021 } 00:11:38.021 ] 00:11:38.021 } 00:11:38.021 ] 00:11:38.021 } 00:11:38.279 [2024-11-27 06:06:43.136670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.279 [2024-11-27 06:06:43.211371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.279 [2024-11-27 06:06:43.272448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.537 [2024-11-27 06:06:43.493879] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:38.537 [2024-11-27 06:06:43.493950] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:38.537 [2024-11-27 06:06:43.493963] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:11:38.537 [2024-11-27 06:06:43.493973] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:38.796 [2024-11-27 06:06:43.821281] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:38.796 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:39.056 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:39.056 06:06:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:39.315 00:11:39.315 real 0m15.821s 00:11:39.315 user 0m10.621s 00:11:39.315 sys 0m13.395s 00:11:39.315 06:06:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.315 06:06:44 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:39.315 ************************************ 00:11:39.315 END TEST dd_uring_copy 00:11:39.315 ************************************ 00:11:39.315 00:11:39.315 real 0m16.060s 00:11:39.315 user 0m10.759s 00:11:39.315 sys 0m13.501s 00:11:39.315 06:06:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.315 ************************************ 00:11:39.315 END TEST spdk_dd_uring 00:11:39.315 ************************************ 00:11:39.315 06:06:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:39.315 06:06:44 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:39.315 06:06:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.315 06:06:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.315 06:06:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:39.315 ************************************ 00:11:39.315 START TEST spdk_dd_sparse 00:11:39.315 ************************************ 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:39.315 * Looking for test storage... 00:11:39.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.315 --rc genhtml_branch_coverage=1 00:11:39.315 --rc genhtml_function_coverage=1 00:11:39.315 --rc genhtml_legend=1 00:11:39.315 --rc geninfo_all_blocks=1 00:11:39.315 --rc geninfo_unexecuted_blocks=1 00:11:39.315 00:11:39.315 ' 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.315 --rc genhtml_branch_coverage=1 00:11:39.315 --rc genhtml_function_coverage=1 00:11:39.315 --rc genhtml_legend=1 00:11:39.315 --rc geninfo_all_blocks=1 00:11:39.315 --rc geninfo_unexecuted_blocks=1 00:11:39.315 00:11:39.315 ' 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.315 --rc genhtml_branch_coverage=1 00:11:39.315 --rc genhtml_function_coverage=1 00:11:39.315 --rc genhtml_legend=1 00:11:39.315 --rc geninfo_all_blocks=1 00:11:39.315 --rc geninfo_unexecuted_blocks=1 00:11:39.315 00:11:39.315 ' 00:11:39.315 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.315 --rc genhtml_branch_coverage=1 00:11:39.315 --rc genhtml_function_coverage=1 00:11:39.315 --rc genhtml_legend=1 00:11:39.315 --rc geninfo_all_blocks=1 00:11:39.315 --rc geninfo_unexecuted_blocks=1 00:11:39.315 00:11:39.315 ' 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.574 06:06:44 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:39.574 1+0 records in 00:11:39.574 1+0 records out 00:11:39.574 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00608354 s, 689 MB/s 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:39.574 1+0 records in 00:11:39.574 1+0 records out 00:11:39.574 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00778252 s, 539 MB/s 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:39.574 1+0 records in 00:11:39.574 1+0 records out 00:11:39.574 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00363439 s, 1.2 GB/s 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:39.574 ************************************ 00:11:39.574 START TEST dd_sparse_file_to_file 00:11:39.574 ************************************ 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:39.574 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:39.575 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:39.575 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:39.575 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:39.575 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:39.575 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:39.575 06:06:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:39.575 [2024-11-27 06:06:44.508204] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:39.575 [2024-11-27 06:06:44.508298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61756 ] 00:11:39.575 { 00:11:39.575 "subsystems": [ 00:11:39.575 { 00:11:39.575 "subsystem": "bdev", 00:11:39.575 "config": [ 00:11:39.575 { 00:11:39.575 "params": { 00:11:39.575 "block_size": 4096, 00:11:39.575 "filename": "dd_sparse_aio_disk", 00:11:39.575 "name": "dd_aio" 00:11:39.575 }, 00:11:39.575 "method": "bdev_aio_create" 00:11:39.575 }, 00:11:39.575 { 00:11:39.575 "params": { 00:11:39.575 "lvs_name": "dd_lvstore", 00:11:39.575 "bdev_name": "dd_aio" 00:11:39.575 }, 00:11:39.575 "method": "bdev_lvol_create_lvstore" 00:11:39.575 }, 00:11:39.575 { 00:11:39.575 "method": "bdev_wait_for_examine" 00:11:39.575 } 00:11:39.575 ] 00:11:39.575 } 00:11:39.575 ] 00:11:39.575 } 00:11:39.575 [2024-11-27 06:06:44.653517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.832 [2024-11-27 06:06:44.716145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.832 [2024-11-27 06:06:44.770488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:39.832  [2024-11-27T06:06:45.188Z] Copying: 12/36 [MB] (average 923 MBps) 00:11:40.091 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:40.091 00:11:40.091 real 0m0.649s 00:11:40.091 user 0m0.407s 00:11:40.091 sys 0m0.345s 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:40.091 ************************************ 00:11:40.091 END TEST dd_sparse_file_to_file 00:11:40.091 ************************************ 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:40.091 ************************************ 00:11:40.091 START TEST dd_sparse_file_to_bdev 
00:11:40.091 ************************************ 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:40.091 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:40.350 [2024-11-27 06:06:45.214628] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:40.350 [2024-11-27 06:06:45.214735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61799 ] 00:11:40.350 { 00:11:40.350 "subsystems": [ 00:11:40.350 { 00:11:40.350 "subsystem": "bdev", 00:11:40.350 "config": [ 00:11:40.350 { 00:11:40.350 "params": { 00:11:40.350 "block_size": 4096, 00:11:40.350 "filename": "dd_sparse_aio_disk", 00:11:40.350 "name": "dd_aio" 00:11:40.350 }, 00:11:40.350 "method": "bdev_aio_create" 00:11:40.350 }, 00:11:40.350 { 00:11:40.350 "params": { 00:11:40.350 "lvs_name": "dd_lvstore", 00:11:40.350 "lvol_name": "dd_lvol", 00:11:40.350 "size_in_mib": 36, 00:11:40.350 "thin_provision": true 00:11:40.350 }, 00:11:40.350 "method": "bdev_lvol_create" 00:11:40.350 }, 00:11:40.350 { 00:11:40.350 "method": "bdev_wait_for_examine" 00:11:40.350 } 00:11:40.350 ] 00:11:40.350 } 00:11:40.350 ] 00:11:40.350 } 00:11:40.350 [2024-11-27 06:06:45.362315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.350 [2024-11-27 06:06:45.431940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.609 [2024-11-27 06:06:45.489689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:40.609  [2024-11-27T06:06:45.964Z] Copying: 12/36 [MB] (average 571 MBps) 00:11:40.867 00:11:40.867 00:11:40.867 real 0m0.647s 00:11:40.867 user 0m0.418s 00:11:40.867 sys 0m0.348s 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.867 ************************************ 00:11:40.867 END TEST dd_sparse_file_to_bdev 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 ************************************ 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 ************************************ 00:11:40.867 START TEST dd_sparse_bdev_to_file 00:11:40.867 ************************************ 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:40.867 06:06:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 { 00:11:40.867 "subsystems": [ 00:11:40.867 { 00:11:40.867 "subsystem": "bdev", 00:11:40.867 "config": [ 00:11:40.867 { 00:11:40.867 "params": { 00:11:40.867 "block_size": 4096, 00:11:40.867 "filename": "dd_sparse_aio_disk", 00:11:40.867 "name": "dd_aio" 00:11:40.867 }, 00:11:40.867 "method": "bdev_aio_create" 00:11:40.867 }, 00:11:40.867 { 00:11:40.867 "method": "bdev_wait_for_examine" 00:11:40.867 } 00:11:40.867 ] 00:11:40.867 } 00:11:40.867 ] 00:11:40.867 } 00:11:40.867 [2024-11-27 06:06:45.907961] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:40.867 [2024-11-27 06:06:45.908056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:11:41.125 [2024-11-27 06:06:46.049047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.125 [2024-11-27 06:06:46.114078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.125 [2024-11-27 06:06:46.169328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.384  [2024-11-27T06:06:46.481Z] Copying: 12/36 [MB] (average 1000 MBps) 00:11:41.384 00:11:41.384 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:41.384 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:41.384 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:41.384 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:41.384 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:41.384 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:41.642 00:11:41.642 real 0m0.641s 00:11:41.642 user 0m0.397s 00:11:41.642 sys 0m0.345s 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:41.642 ************************************ 00:11:41.642 END TEST dd_sparse_bdev_to_file 00:11:41.642 ************************************ 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:41.642 00:11:41.642 real 0m2.309s 00:11:41.642 user 0m1.387s 00:11:41.642 sys 0m1.243s 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.642 ************************************ 00:11:41.642 06:06:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:41.642 END TEST spdk_dd_sparse 00:11:41.642 ************************************ 00:11:41.642 06:06:46 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:41.642 06:06:46 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.642 06:06:46 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.642 06:06:46 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.642 ************************************ 00:11:41.642 START TEST spdk_dd_negative 00:11:41.642 ************************************ 00:11:41.642 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:41.642 * Looking for test storage... 00:11:41.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:41.642 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:41.642 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:41.642 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.915 --rc genhtml_branch_coverage=1 00:11:41.915 --rc genhtml_function_coverage=1 00:11:41.915 --rc genhtml_legend=1 00:11:41.915 --rc geninfo_all_blocks=1 00:11:41.915 --rc geninfo_unexecuted_blocks=1 00:11:41.915 00:11:41.915 ' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.915 --rc genhtml_branch_coverage=1 00:11:41.915 --rc genhtml_function_coverage=1 00:11:41.915 --rc genhtml_legend=1 00:11:41.915 --rc geninfo_all_blocks=1 00:11:41.915 --rc geninfo_unexecuted_blocks=1 00:11:41.915 00:11:41.915 ' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.915 --rc genhtml_branch_coverage=1 00:11:41.915 --rc genhtml_function_coverage=1 00:11:41.915 --rc genhtml_legend=1 00:11:41.915 --rc geninfo_all_blocks=1 00:11:41.915 --rc geninfo_unexecuted_blocks=1 00:11:41.915 00:11:41.915 ' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.915 --rc genhtml_branch_coverage=1 00:11:41.915 --rc genhtml_function_coverage=1 00:11:41.915 --rc genhtml_legend=1 00:11:41.915 --rc geninfo_all_blocks=1 00:11:41.915 --rc geninfo_unexecuted_blocks=1 00:11:41.915 00:11:41.915 ' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:41.915 ************************************ 00:11:41.915 START TEST 
dd_invalid_arguments 00:11:41.915 ************************************ 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:41.915 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:41.915 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:41.915 00:11:41.915 CPU options: 00:11:41.915 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:41.915 (like [0,1,10]) 00:11:41.915 --lcores lcore to CPU mapping list. The list is in the format: 00:11:41.915 [<,lcores[@CPUs]>...] 00:11:41.915 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:41.916 Within the group, '-' is used for range separator, 00:11:41.916 ',' is used for single number separator. 00:11:41.916 '( )' can be omitted for single element group, 00:11:41.916 '@' can be omitted if cpus and lcores have the same value 00:11:41.916 --disable-cpumask-locks Disable CPU core lock files. 00:11:41.916 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:41.916 pollers in the app support interrupt mode) 00:11:41.916 -p, --main-core main (primary) core for DPDK 00:11:41.916 00:11:41.916 Configuration options: 00:11:41.916 -c, --config, --json JSON config file 00:11:41.916 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:41.916 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:41.916 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:41.916 --rpcs-allowed comma-separated list of permitted RPCS 00:11:41.916 --json-ignore-init-errors don't exit on invalid config entry 00:11:41.916 00:11:41.916 Memory options: 00:11:41.916 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:41.916 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:41.916 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:41.916 -R, --huge-unlink unlink huge files after initialization 00:11:41.916 -n, --mem-channels number of memory channels used for DPDK 00:11:41.916 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:41.916 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:41.916 --no-huge run without using hugepages 00:11:41.916 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:41.916 -i, --shm-id shared memory ID (optional) 00:11:41.916 -g, --single-file-segments force creating just one hugetlbfs file 00:11:41.916 00:11:41.916 PCI options: 00:11:41.916 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:41.916 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:41.916 -u, --no-pci disable PCI access 00:11:41.916 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:41.916 00:11:41.916 Log options: 00:11:41.916 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:41.916 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:41.916 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:41.916 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:41.916 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:11:41.916 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:11:41.916 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:11:41.916 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:11:41.916 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:11:41.916 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:11:41.916 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:41.916 --silence-noticelog disable notice level logging to stderr 00:11:41.916 00:11:41.916 Trace options: 00:11:41.916 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:41.916 setting 0 to disable trace (default 32768) 00:11:41.916 Tracepoints vary in size and can use more than one trace entry. 00:11:41.916 -e, --tpoint-group [:] 00:11:41.916 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:41.916 [2024-11-27 06:06:46.833796] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:11:41.916 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:11:41.916 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:11:41.916 bdev_raid, scheduler, all). 00:11:41.916 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:41.916 a tracepoint group. First tpoint inside a group can be enabled by 00:11:41.916 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:41.916 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:41.916 in /include/spdk_internal/trace_defs.h 00:11:41.916 00:11:41.916 Other options: 00:11:41.916 -h, --help show this usage 00:11:41.916 -v, --version print SPDK version 00:11:41.916 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:41.916 --env-context Opaque context for use of the env implementation 00:11:41.916 00:11:41.916 Application specific: 00:11:41.916 [--------- DD Options ---------] 00:11:41.916 --if Input file. Must specify either --if or --ib. 00:11:41.916 --ib Input bdev. Must specifier either --if or --ib 00:11:41.916 --of Output file. Must specify either --of or --ob. 00:11:41.916 --ob Output bdev. Must specify either --of or --ob. 00:11:41.916 --iflag Input file flags. 00:11:41.916 --oflag Output file flags. 00:11:41.916 --bs I/O unit size (default: 4096) 00:11:41.916 --qd Queue depth (default: 2) 00:11:41.916 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:41.916 --skip Skip this many I/O units at start of input. (default: 0) 00:11:41.916 --seek Skip this many I/O units at start of output. (default: 0) 00:11:41.916 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:41.916 --sparse Enable hole skipping in input target 00:11:41.916 Available iflag and oflag values: 00:11:41.916 append - append mode 00:11:41.916 direct - use direct I/O for data 00:11:41.916 directory - fail unless a directory 00:11:41.916 dsync - use synchronized I/O for data 00:11:41.916 noatime - do not update access time 00:11:41.916 noctty - do not assign controlling terminal from file 00:11:41.916 nofollow - do not follow symlinks 00:11:41.916 nonblock - use non-blocking I/O 00:11:41.916 sync - use synchronized I/O for data and metadata 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.916 00:11:41.916 real 0m0.067s 00:11:41.916 user 0m0.042s 00:11:41.916 sys 0m0.025s 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:41.916 ************************************ 00:11:41.916 END TEST dd_invalid_arguments 00:11:41.916 ************************************ 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:41.916 ************************************ 00:11:41.916 START TEST dd_double_input 00:11:41.916 ************************************ 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:11:41.916 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:41.917 [2024-11-27 06:06:46.947539] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
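The dd_double_input failure above is the basic pattern every test in this negative suite follows: invoke spdk_dd with a contradictory option set and require a non-zero exit. Below is a minimal stand-alone sketch of that pattern, not the suite's actual code: the expect_failure helper is an illustrative stand-in for the NOT()/run_test wrappers in autotest_common.sh, the binary path is the one shown in the trace, and dd.dump0 mirrors the scratch file negative_dd.sh touches.

  # Hypothetical reduction of the negative-test pattern traced above.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  expect_failure() {
      # Succeed only when the wrapped command exits non-zero (stand-in for NOT()).
      if "$@"; then
          echo "expected failure but command succeeded: $*" >&2
          return 1
      fi
      return 0
  }
  touch dd.dump0
  # Supplying both a file input (--if) and a bdev input (--ib) must be rejected,
  # matching the spdk_dd.c:1487 error printed in the log above.
  expect_failure "$SPDK_DD" --if=dd.dump0 --ib= --ob=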
00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.917 00:11:41.917 real 0m0.069s 00:11:41.917 user 0m0.038s 00:11:41.917 sys 0m0.030s 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:41.917 ************************************ 00:11:41.917 END TEST dd_double_input 00:11:41.917 ************************************ 00:11:41.917 06:06:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:42.183 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.183 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.183 06:06:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.183 ************************************ 00:11:42.183 START TEST dd_double_output 00:11:42.183 ************************************ 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:42.183 [2024-11-27 06:06:47.060405] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.183 00:11:42.183 real 0m0.068s 00:11:42.183 user 0m0.040s 00:11:42.183 sys 0m0.028s 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:42.183 ************************************ 00:11:42.183 END TEST dd_double_output 00:11:42.183 ************************************ 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.183 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.184 ************************************ 00:11:42.184 START TEST dd_no_input 00:11:42.184 ************************************ 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:42.184 [2024-11-27 06:06:47.188545] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.184 00:11:42.184 real 0m0.092s 00:11:42.184 user 0m0.057s 00:11:42.184 sys 0m0.034s 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:42.184 ************************************ 00:11:42.184 END TEST dd_no_input 00:11:42.184 ************************************ 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.184 ************************************ 00:11:42.184 START TEST dd_no_output 00:11:42.184 ************************************ 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.184 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:42.442 [2024-11-27 06:06:47.317199] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:11:42.442 06:06:47 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:11:42.442 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.443 00:11:42.443 real 0m0.073s 00:11:42.443 user 0m0.047s 00:11:42.443 sys 0m0.025s 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:42.443 ************************************ 00:11:42.443 END TEST dd_no_output 00:11:42.443 ************************************ 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.443 ************************************ 00:11:42.443 START TEST dd_wrong_blocksize 00:11:42.443 ************************************ 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:42.443 [2024-11-27 06:06:47.451202] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.443 00:11:42.443 real 0m0.094s 00:11:42.443 user 0m0.059s 00:11:42.443 sys 0m0.033s 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:42.443 ************************************ 00:11:42.443 END TEST dd_wrong_blocksize 00:11:42.443 ************************************ 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.443 ************************************ 00:11:42.443 START TEST dd_smaller_blocksize 00:11:42.443 ************************************ 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.443 
06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.443 06:06:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:42.701 [2024-11-27 06:06:47.578595] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:42.701 [2024-11-27 06:06:47.578697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62063 ] 00:11:42.702 [2024-11-27 06:06:47.722659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.702 [2024-11-27 06:06:47.795113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.959 [2024-11-27 06:06:47.854338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:43.218 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:43.477 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:43.477 [2024-11-27 06:06:48.468547] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:43.477 [2024-11-27 06:06:48.468620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.735 [2024-11-27 06:06:48.589668] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.735 00:11:43.735 real 0m1.130s 00:11:43.735 user 0m0.410s 00:11:43.735 sys 0m0.611s 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:43.735 ************************************ 00:11:43.735 END TEST dd_smaller_blocksize 00:11:43.735 ************************************ 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.735 ************************************ 00:11:43.735 START TEST dd_invalid_count 00:11:43.735 ************************************ 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
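The dd_smaller_blocksize run above pushes the same pattern through memory limits: a --bs of 99999999999999 makes EAL fail to find a memseg list and spdk_dd aborts with "try smaller block size value", which the test treats as the expected outcome. A hedged, self-contained sketch of that check follows; file names are illustrative and the flags (--if/--of/--bs) are the ones listed in the spdk_dd usage text earlier in this log.

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  # Small scratch input, mirroring the dd.dump0 file negative_dd.sh prepares.
  dd if=/dev/zero of=dd.dump0 bs=4096 count=16 2>/dev/null
  # An oversized block size must fail to allocate and exit non-zero.
  if "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --bs=99999999999999; then
      echo "oversized --bs unexpectedly succeeded" >&2
      exit 1
  fi
  # A modest block size is expected to work when hugepages are set up as in this run.
  "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --bs=4096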
00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:43.735 [2024-11-27 06:06:48.760954] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.735 00:11:43.735 real 0m0.070s 00:11:43.735 user 0m0.046s 00:11:43.735 sys 0m0.023s 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.735 ************************************ 00:11:43.735 END TEST dd_invalid_count 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:11:43.735 ************************************ 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.735 ************************************ 
00:11:43.735 START TEST dd_invalid_oflag 00:11:43.735 ************************************ 00:11:43.735 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:11:43.736 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:43.736 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:11:43.736 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:43.736 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.736 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.736 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:43.994 [2024-11-27 06:06:48.888963] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.994 00:11:43.994 real 0m0.084s 00:11:43.994 user 0m0.056s 00:11:43.994 sys 0m0.026s 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.994 ************************************ 00:11:43.994 END TEST dd_invalid_oflag 00:11:43.994 ************************************ 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.994 ************************************ 00:11:43.994 START TEST dd_invalid_iflag 00:11:43.994 
************************************ 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:43.994 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:43.995 06:06:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:43.995 [2024-11-27 06:06:49.018575] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:11:43.995 ************************************ 00:11:43.995 END TEST dd_invalid_iflag 00:11:43.995 ************************************ 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.995 00:11:43.995 real 0m0.079s 00:11:43.995 user 0m0.047s 00:11:43.995 sys 0m0.030s 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.995 ************************************ 00:11:43.995 START TEST dd_unknown_flag 00:11:43.995 ************************************ 00:11:43.995 
06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.995 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.253 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.253 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.253 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.253 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.253 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:44.253 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:44.253 [2024-11-27 06:06:49.144543] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:44.253 [2024-11-27 06:06:49.144650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62161 ] 00:11:44.253 [2024-11-27 06:06:49.296415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.512 [2024-11-27 06:06:49.368642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.512 [2024-11-27 06:06:49.427021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.512 [2024-11-27 06:06:49.469605] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:44.512 [2024-11-27 06:06:49.469680] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:44.512 [2024-11-27 06:06:49.469752] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:44.512 [2024-11-27 06:06:49.469770] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:44.512 [2024-11-27 06:06:49.470038] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:11:44.512 [2024-11-27 06:06:49.470059] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:44.512 [2024-11-27 06:06:49.470145] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:44.512 [2024-11-27 06:06:49.470162] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:44.512 [2024-11-27 06:06:49.597908] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:44.775 00:11:44.775 real 0m0.588s 00:11:44.775 user 0m0.334s 00:11:44.775 sys 0m0.156s 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.775 ************************************ 00:11:44.775 END TEST dd_unknown_flag 00:11:44.775 ************************************ 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:44.775 ************************************ 00:11:44.775 START TEST dd_invalid_json 00:11:44.775 ************************************ 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:11:44.775 06:06:49 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:44.775 06:06:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:44.775 [2024-11-27 06:06:49.790349] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
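In the dd_invalid_json run that follows, the configuration handed to spdk_dd over /dev/fd/62 is empty (the bare ":" builtin traced at negative_dd.sh@94 produces no output), so spdk_dd rejects it with "JSON data cannot be empty" and exits non-zero, which NOT then counts as a pass. A rough standalone equivalent, assuming only the paths already used in this run:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# <(:) supplies an empty stream; spdk_dd should fail with "JSON data cannot be empty"
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
           --json <(:)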
00:11:44.775 [2024-11-27 06:06:49.790471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:11:45.033 [2024-11-27 06:06:49.936342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.033 [2024-11-27 06:06:50.000829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.033 [2024-11-27 06:06:50.000906] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:11:45.033 [2024-11-27 06:06:50.000924] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:45.033 [2024-11-27 06:06:50.000934] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:45.033 [2024-11-27 06:06:50.000971] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:11:45.033 ************************************ 00:11:45.033 END TEST dd_invalid_json 00:11:45.033 ************************************ 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.033 00:11:45.033 real 0m0.343s 00:11:45.033 user 0m0.172s 00:11:45.033 sys 0m0.067s 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.033 ************************************ 00:11:45.033 START TEST dd_invalid_seek 00:11:45.033 ************************************ 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:45.033 
06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.033 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:45.292 [2024-11-27 06:06:50.181841] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
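The dd_invalid_seek test runs spdk_dd against two 512-block malloc bdevs rather than dump files; the JSON printed below is gen_conf's rendering of the method_bdev_malloc_create_0/1 arrays declared above, delivered over /dev/fd/62. Outside the harness, roughly the same invocation could be reproduced with a plain config file (the temp path here is illustrative):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > /tmp/malloc_bdevs.json <<'CONF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 } },
        { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 512, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
CONF
# --seek=513 exceeds malloc1's 512 blocks, so spdk_dd is expected to fail with
# "--seek value too big (513) - only 512 blocks available in output"
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /tmp/malloc_bdevs.json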
00:11:45.292 { 00:11:45.292 "subsystems": [ 00:11:45.292 { 00:11:45.292 "subsystem": "bdev", 00:11:45.292 "config": [ 00:11:45.292 { 00:11:45.292 "params": { 00:11:45.292 "block_size": 512, 00:11:45.292 "num_blocks": 512, 00:11:45.292 "name": "malloc0" 00:11:45.292 }, 00:11:45.292 "method": "bdev_malloc_create" 00:11:45.292 }, 00:11:45.292 { 00:11:45.292 "params": { 00:11:45.292 "block_size": 512, 00:11:45.292 "num_blocks": 512, 00:11:45.292 "name": "malloc1" 00:11:45.292 }, 00:11:45.292 "method": "bdev_malloc_create" 00:11:45.292 }, 00:11:45.292 { 00:11:45.292 "method": "bdev_wait_for_examine" 00:11:45.292 } 00:11:45.292 ] 00:11:45.292 } 00:11:45.292 ] 00:11:45.292 } 00:11:45.292 [2024-11-27 06:06:50.182248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62219 ] 00:11:45.292 [2024-11-27 06:06:50.332603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.551 [2024-11-27 06:06:50.401360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.551 [2024-11-27 06:06:50.459150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.551 [2024-11-27 06:06:50.526898] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:11:45.551 [2024-11-27 06:06:50.526994] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:45.811 [2024-11-27 06:06:50.653513] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.811 00:11:45.811 real 0m0.602s 00:11:45.811 user 0m0.393s 00:11:45.811 sys 0m0.166s 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.811 ************************************ 00:11:45.811 END TEST dd_invalid_seek 00:11:45.811 ************************************ 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.811 ************************************ 00:11:45.811 START TEST dd_invalid_skip 00:11:45.811 ************************************ 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.811 06:06:50 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:45.811 [2024-11-27 06:06:50.825139] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:45.811 [2024-11-27 06:06:50.825373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62252 ] 00:11:45.811 { 00:11:45.811 "subsystems": [ 00:11:45.811 { 00:11:45.811 "subsystem": "bdev", 00:11:45.811 "config": [ 00:11:45.811 { 00:11:45.811 "params": { 00:11:45.811 "block_size": 512, 00:11:45.811 "num_blocks": 512, 00:11:45.811 "name": "malloc0" 00:11:45.811 }, 00:11:45.811 "method": "bdev_malloc_create" 00:11:45.811 }, 00:11:45.811 { 00:11:45.811 "params": { 00:11:45.811 "block_size": 512, 00:11:45.811 "num_blocks": 512, 00:11:45.811 "name": "malloc1" 00:11:45.811 }, 00:11:45.811 "method": "bdev_malloc_create" 00:11:45.811 }, 00:11:45.811 { 00:11:45.811 "method": "bdev_wait_for_examine" 00:11:45.811 } 00:11:45.811 ] 00:11:45.811 } 00:11:45.811 ] 00:11:45.811 } 00:11:46.077 [2024-11-27 06:06:50.967945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.077 [2024-11-27 06:06:51.031592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.077 [2024-11-27 06:06:51.087265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.077 [2024-11-27 06:06:51.151481] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:11:46.077 [2024-11-27 06:06:51.151558] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.336 [2024-11-27 06:06:51.274242] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.336 00:11:46.336 real 0m0.569s 00:11:46.336 user 0m0.366s 00:11:46.336 sys 0m0.156s 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.336 ************************************ 00:11:46.336 END TEST dd_invalid_skip 00:11:46.336 ************************************ 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:46.336 ************************************ 00:11:46.336 START TEST dd_invalid_input_count 00:11:46.336 ************************************ 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:11:46.336 06:06:51 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:46.336 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:46.595 [2024-11-27 06:06:51.460446] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:46.595 { 00:11:46.595 "subsystems": [ 00:11:46.595 { 00:11:46.595 "subsystem": "bdev", 00:11:46.595 "config": [ 00:11:46.595 { 00:11:46.595 "params": { 00:11:46.595 "block_size": 512, 00:11:46.595 "num_blocks": 512, 00:11:46.595 "name": "malloc0" 00:11:46.595 }, 00:11:46.595 "method": "bdev_malloc_create" 00:11:46.595 }, 00:11:46.595 { 00:11:46.595 "params": { 00:11:46.595 "block_size": 512, 00:11:46.595 "num_blocks": 512, 00:11:46.595 "name": "malloc1" 00:11:46.595 }, 00:11:46.595 "method": "bdev_malloc_create" 00:11:46.595 }, 00:11:46.595 { 00:11:46.595 "method": "bdev_wait_for_examine" 00:11:46.595 } 00:11:46.595 ] 00:11:46.595 } 00:11:46.595 ] 00:11:46.595 } 00:11:46.595 [2024-11-27 06:06:51.461254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62286 ] 00:11:46.595 [2024-11-27 06:06:51.614284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.595 [2024-11-27 06:06:51.683420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.855 [2024-11-27 06:06:51.742868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.855 [2024-11-27 06:06:51.811443] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:11:46.855 [2024-11-27 06:06:51.811516] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.855 [2024-11-27 06:06:51.935544] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:47.113 ************************************ 00:11:47.113 END TEST dd_invalid_input_count 00:11:47.113 ************************************ 00:11:47.113 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:11:47.113 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:47.113 06:06:51 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:11:47.113 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:11:47.113 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:11:47.113 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:47.113 00:11:47.113 real 0m0.605s 00:11:47.113 user 0m0.384s 00:11:47.113 sys 0m0.178s 00:11:47.113 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.113 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:47.113 06:06:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:47.114 ************************************ 00:11:47.114 START TEST dd_invalid_output_count 00:11:47.114 ************************************ 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:47.114 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:47.114 [2024-11-27 06:06:52.119458] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:47.114 [2024-11-27 06:06:52.119557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62320 ] 00:11:47.114 { 00:11:47.114 "subsystems": [ 00:11:47.114 { 00:11:47.114 "subsystem": "bdev", 00:11:47.114 "config": [ 00:11:47.114 { 00:11:47.114 "params": { 00:11:47.114 "block_size": 512, 00:11:47.114 "num_blocks": 512, 00:11:47.114 "name": "malloc0" 00:11:47.114 }, 00:11:47.114 "method": "bdev_malloc_create" 00:11:47.114 }, 00:11:47.114 { 00:11:47.114 "method": "bdev_wait_for_examine" 00:11:47.114 } 00:11:47.114 ] 00:11:47.114 } 00:11:47.114 ] 00:11:47.114 } 00:11:47.372 [2024-11-27 06:06:52.271447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.372 [2024-11-27 06:06:52.342342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.372 [2024-11-27 06:06:52.402150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:47.372 [2024-11-27 06:06:52.464990] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:11:47.372 [2024-11-27 06:06:52.465321] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:47.631 [2024-11-27 06:06:52.599744] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:47.631 00:11:47.631 real 0m0.617s 00:11:47.631 user 0m0.413s 00:11:47.631 sys 0m0.165s 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:47.631 ************************************ 00:11:47.631 END TEST dd_invalid_output_count 00:11:47.631 ************************************ 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:47.631 ************************************ 00:11:47.631 START TEST dd_bs_not_multiple 00:11:47.631 ************************************ 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:47.631 06:06:52 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:11:47.631 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:47.890 06:06:52 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:47.890 { 00:11:47.890 "subsystems": [ 00:11:47.890 { 00:11:47.890 "subsystem": "bdev", 00:11:47.890 "config": [ 00:11:47.890 { 00:11:47.890 "params": { 00:11:47.890 "block_size": 512, 00:11:47.890 "num_blocks": 512, 00:11:47.890 "name": "malloc0" 00:11:47.890 }, 00:11:47.890 "method": "bdev_malloc_create" 00:11:47.890 }, 00:11:47.890 { 00:11:47.890 "params": { 00:11:47.890 "block_size": 512, 00:11:47.890 "num_blocks": 512, 00:11:47.890 "name": "malloc1" 00:11:47.890 }, 00:11:47.890 "method": "bdev_malloc_create" 00:11:47.890 }, 
00:11:47.890 { 00:11:47.890 "method": "bdev_wait_for_examine" 00:11:47.890 } 00:11:47.890 ] 00:11:47.890 } 00:11:47.890 ] 00:11:47.890 } 00:11:47.890 [2024-11-27 06:06:52.788258] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:47.890 [2024-11-27 06:06:52.788356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62356 ] 00:11:47.890 [2024-11-27 06:06:52.941571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.148 [2024-11-27 06:06:53.018834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.148 [2024-11-27 06:06:53.078155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.148 [2024-11-27 06:06:53.146220] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:11:48.148 [2024-11-27 06:06:53.146299] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:48.407 [2024-11-27 06:06:53.275002] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:48.407 00:11:48.407 real 0m0.618s 00:11:48.407 user 0m0.392s 00:11:48.407 sys 0m0.180s 00:11:48.407 ************************************ 00:11:48.407 END TEST dd_bs_not_multiple 00:11:48.407 ************************************ 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:48.407 ************************************ 00:11:48.407 END TEST spdk_dd_negative 00:11:48.407 ************************************ 00:11:48.407 00:11:48.407 real 0m6.783s 00:11:48.407 user 0m3.661s 00:11:48.407 sys 0m2.550s 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.407 06:06:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:48.407 ************************************ 00:11:48.407 END TEST spdk_dd 00:11:48.407 ************************************ 00:11:48.407 00:11:48.407 real 1m22.637s 00:11:48.407 user 0m53.309s 00:11:48.407 sys 0m36.415s 00:11:48.407 06:06:53 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.407 06:06:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:48.407 06:06:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:48.407 06:06:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:48.407 06:06:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:48.407 06:06:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.407 06:06:53 -- common/autotest_common.sh@10 -- # 
set +x 00:11:48.666 06:06:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:48.666 06:06:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:48.666 06:06:53 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:48.666 06:06:53 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:11:48.666 06:06:53 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:11:48.666 06:06:53 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:11:48.666 06:06:53 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:48.666 06:06:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.666 06:06:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.666 06:06:53 -- common/autotest_common.sh@10 -- # set +x 00:11:48.666 ************************************ 00:11:48.666 START TEST nvmf_tcp 00:11:48.666 ************************************ 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:48.666 * Looking for test storage... 00:11:48.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.666 06:06:53 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.666 --rc genhtml_branch_coverage=1 00:11:48.666 --rc genhtml_function_coverage=1 00:11:48.666 --rc genhtml_legend=1 00:11:48.666 --rc geninfo_all_blocks=1 00:11:48.666 --rc geninfo_unexecuted_blocks=1 00:11:48.666 00:11:48.666 ' 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.666 --rc genhtml_branch_coverage=1 00:11:48.666 --rc genhtml_function_coverage=1 00:11:48.666 --rc genhtml_legend=1 00:11:48.666 --rc geninfo_all_blocks=1 00:11:48.666 --rc geninfo_unexecuted_blocks=1 00:11:48.666 00:11:48.666 ' 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.666 --rc genhtml_branch_coverage=1 00:11:48.666 --rc genhtml_function_coverage=1 00:11:48.666 --rc genhtml_legend=1 00:11:48.666 --rc geninfo_all_blocks=1 00:11:48.666 --rc geninfo_unexecuted_blocks=1 00:11:48.666 00:11:48.666 ' 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.666 --rc genhtml_branch_coverage=1 00:11:48.666 --rc genhtml_function_coverage=1 00:11:48.666 --rc genhtml_legend=1 00:11:48.666 --rc geninfo_all_blocks=1 00:11:48.666 --rc geninfo_unexecuted_blocks=1 00:11:48.666 00:11:48.666 ' 00:11:48.666 06:06:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:48.666 06:06:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:48.666 06:06:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.666 06:06:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.666 ************************************ 00:11:48.666 START TEST nvmf_target_core 00:11:48.666 ************************************ 00:11:48.666 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:48.925 * Looking for test storage... 00:11:48.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.925 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.925 --rc genhtml_branch_coverage=1 00:11:48.926 --rc genhtml_function_coverage=1 00:11:48.926 --rc genhtml_legend=1 00:11:48.926 --rc geninfo_all_blocks=1 00:11:48.926 --rc geninfo_unexecuted_blocks=1 00:11:48.926 00:11:48.926 ' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.926 --rc genhtml_branch_coverage=1 00:11:48.926 --rc genhtml_function_coverage=1 00:11:48.926 --rc genhtml_legend=1 00:11:48.926 --rc geninfo_all_blocks=1 00:11:48.926 --rc geninfo_unexecuted_blocks=1 00:11:48.926 00:11:48.926 ' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.926 --rc genhtml_branch_coverage=1 00:11:48.926 --rc genhtml_function_coverage=1 00:11:48.926 --rc genhtml_legend=1 00:11:48.926 --rc geninfo_all_blocks=1 00:11:48.926 --rc geninfo_unexecuted_blocks=1 00:11:48.926 00:11:48.926 ' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.926 --rc genhtml_branch_coverage=1 00:11:48.926 --rc genhtml_function_coverage=1 00:11:48.926 --rc genhtml_legend=1 00:11:48.926 --rc geninfo_all_blocks=1 00:11:48.926 --rc geninfo_unexecuted_blocks=1 00:11:48.926 00:11:48.926 ' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
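Hedged aside: the common.sh trace above pins the fabric defaults the whole suite reuses (NVMF_PORT=4420, a freshly generated host NQN/host ID, NVME_SUBNQN, and the NVME_HOST argument array). The host_management run below drives I/O through the userspace initiator in bdevperf, but for kernel-initiator tests elsewhere in the suite these same variables would typically be consumed along these lines (illustrative only, not traced in this run; standard nvme-cli flags):

    nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" "${NVME_HOST[@]}"    # NVME_HOST expands to --hostnqn=... --hostid=...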
00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.926 ************************************ 00:11:48.926 START TEST nvmf_host_management 00:11:48.926 ************************************ 00:11:48.926 06:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:49.185 * Looking for test storage... 
00:11:49.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.185 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.186 --rc genhtml_branch_coverage=1 00:11:49.186 --rc genhtml_function_coverage=1 00:11:49.186 --rc genhtml_legend=1 00:11:49.186 --rc geninfo_all_blocks=1 00:11:49.186 --rc geninfo_unexecuted_blocks=1 00:11:49.186 00:11:49.186 ' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.186 --rc genhtml_branch_coverage=1 00:11:49.186 --rc genhtml_function_coverage=1 00:11:49.186 --rc genhtml_legend=1 00:11:49.186 --rc geninfo_all_blocks=1 00:11:49.186 --rc geninfo_unexecuted_blocks=1 00:11:49.186 00:11:49.186 ' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.186 --rc genhtml_branch_coverage=1 00:11:49.186 --rc genhtml_function_coverage=1 00:11:49.186 --rc genhtml_legend=1 00:11:49.186 --rc geninfo_all_blocks=1 00:11:49.186 --rc geninfo_unexecuted_blocks=1 00:11:49.186 00:11:49.186 ' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:49.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.186 --rc genhtml_branch_coverage=1 00:11:49.186 --rc genhtml_function_coverage=1 00:11:49.186 --rc genhtml_legend=1 00:11:49.186 --rc geninfo_all_blocks=1 00:11:49.186 --rc geninfo_unexecuted_blocks=1 00:11:49.186 00:11:49.186 ' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
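The lt/cmp_versions trace repeated above (once per nested run_test scope) only decides whether the installed lcov is older than 2, so the legacy --rc lcov_* coverage options can be kept. A minimal sketch of that dotted-version comparison, not the SPDK helper itself:

    version_lt() {                                 # true (exit 0) when $1 < $2
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}  # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                   # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"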
00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.186 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.186 06:06:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:49.186 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:49.187 Cannot find device "nvmf_init_br" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:49.187 Cannot find device "nvmf_init_br2" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:49.187 Cannot find device "nvmf_tgt_br" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.187 Cannot find device "nvmf_tgt_br2" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:49.187 Cannot find device "nvmf_init_br" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:49.187 Cannot find device "nvmf_init_br2" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:49.187 Cannot find device "nvmf_tgt_br" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:49.187 Cannot find device "nvmf_tgt_br2" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:49.187 Cannot find device "nvmf_br" 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:11:49.187 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:49.446 Cannot find device "nvmf_init_if" 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:49.446 Cannot find device "nvmf_init_if2" 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:49.446 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.705 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.705 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:49.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:11:49.706 00:11:49.706 --- 10.0.0.3 ping statistics --- 00:11:49.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.706 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:49.706 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:49.706 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:11:49.706 00:11:49.706 --- 10.0.0.4 ping statistics --- 00:11:49.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.706 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:49.706 00:11:49.706 --- 10.0.0.1 ping statistics --- 00:11:49.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.706 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:49.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:49.706 00:11:49.706 --- 10.0.0.2 ping statistics --- 00:11:49.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.706 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62698 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62698 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62698 ']' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.706 06:06:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.706 [2024-11-27 06:06:54.724654] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
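nvmf_veth_init above assembles the self-contained test network: two initiator-side veth interfaces (10.0.0.1, 10.0.0.2), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all joined by the nvmf_br bridge, with iptables ACCEPT rules for TCP/4420 and the four pings as a reachability check. A condensed, hedged reconstruction of one initiator/target pair (the real helper also configures the second pair and performs the teardown attempts seen just before):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3        # initiator side reaches the target namespace over the bridge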
00:11:49.706 [2024-11-27 06:06:54.725028] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.965 [2024-11-27 06:06:54.874493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.965 [2024-11-27 06:06:54.950748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.965 [2024-11-27 06:06:54.951029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.965 [2024-11-27 06:06:54.951223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.965 [2024-11-27 06:06:54.951366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.965 [2024-11-27 06:06:54.951401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.965 [2024-11-27 06:06:54.952620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.965 [2024-11-27 06:06:54.952759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:49.965 [2024-11-27 06:06:54.952760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.965 [2024-11-27 06:06:54.952722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.965 [2024-11-27 06:06:55.007042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.247 [2024-11-27 06:06:55.128444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
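nvmfappstart above launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E) and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers before the transport is created. A hedged sketch of that sequencing; the real helper uses its own readiness probe plus retry and timeout handling:

    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the RPC socket until the app is initialized and accepting commands
    until scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        sleep 0.1
    done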
00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.247 Malloc0 00:11:50.247 [2024-11-27 06:06:55.213806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62750 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62750 /var/tmp/bdevperf.sock 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62750 ']' 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
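Together with the nvmf_create_transport call issued just before it, the rpcs.txt batch cat'd into rpc_cmd above yields the Malloc0 bdev and the TCP listener on 10.0.0.3:4420. Spelled out as individual rpc.py calls, a hedged reconstruction looks roughly like this (the transport options and malloc geometry come from the trace; the remaining arguments are assumptions consistent with the NQNs seen later in the run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0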
00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:50.247 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:50.248 { 00:11:50.248 "params": { 00:11:50.248 "name": "Nvme$subsystem", 00:11:50.248 "trtype": "$TEST_TRANSPORT", 00:11:50.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:50.248 "adrfam": "ipv4", 00:11:50.248 "trsvcid": "$NVMF_PORT", 00:11:50.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:50.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:50.248 "hdgst": ${hdgst:-false}, 00:11:50.248 "ddgst": ${ddgst:-false} 00:11:50.248 }, 00:11:50.248 "method": "bdev_nvme_attach_controller" 00:11:50.248 } 00:11:50.248 EOF 00:11:50.248 )") 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:50.248 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:50.248 "params": { 00:11:50.248 "name": "Nvme0", 00:11:50.248 "trtype": "tcp", 00:11:50.248 "traddr": "10.0.0.3", 00:11:50.248 "adrfam": "ipv4", 00:11:50.248 "trsvcid": "4420", 00:11:50.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:50.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:50.248 "hdgst": false, 00:11:50.248 "ddgst": false 00:11:50.248 }, 00:11:50.248 "method": "bdev_nvme_attach_controller" 00:11:50.248 }' 00:11:50.248 [2024-11-27 06:06:55.314166] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:11:50.248 [2024-11-27 06:06:55.314475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62750 ] 00:11:50.505 [2024-11-27 06:06:55.462348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.505 [2024-11-27 06:06:55.536330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.764 [2024-11-27 06:06:55.605508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.764 Running I/O for 10 seconds... 
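gen_nvmf_target_json above emits the bdev_nvme_attach_controller entry that bdevperf consumes through the --json /dev/fd/63 process substitution, so the remote namespace appears as bdev Nvme0n1 for the 64-deep, 64 KiB verify workload. A hedged equivalent using a plain file; the outer subsystems/bdev wrapper is assumed, while the params block is copied from the JSON printed above:

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10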
00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:50.764 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.022 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:51.022 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:51.022 06:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:51.282 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:51.283 06:06:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=521 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 521 -ge 100 ']' 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.283 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:51.283 [2024-11-27 06:06:56.198090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:51.283 [2024-11-27 06:06:56.198522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 
[2024-11-27 06:06:56.198736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.283 [2024-11-27 06:06:56.198847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.283 [2024-11-27 06:06:56.198866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.198877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.198887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.198899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.198908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.198920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.198939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.198952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.198961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 
06:06:56.198973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.198982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.198994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 
06:06:56.199200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 
06:06:56.199422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.284 [2024-11-27 06:06:56.199605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d32d0 is same with the state(6) to be set 00:11:51.284 [2024-11-27 06:06:56.199831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.284 [2024-11-27 06:06:56.199851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.284 [2024-11-27 06:06:56.199879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.284 [2024-11-27 06:06:56.199899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.284 [2024-11-27 06:06:56.199918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.284 [2024-11-27 06:06:56.199928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d8ce0 is same with the state(6) to be set 00:11:51.284 [2024-11-27 06:06:56.201011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:51.284 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.284 task offset: 81920 on job bdev=Nvme0n1 fails 00:11:51.284 00:11:51.284 Latency(us) 00:11:51.284 [2024-11-27T06:06:56.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:51.285 Job: Nvme0n1 ended in about 0.46 seconds with error 00:11:51.285 Verification LBA range: start 0x0 length 0x400 00:11:51.285 Nvme0n1 : 0.46 1394.49 87.16 139.45 0.00 40360.42 2338.44 38606.66 00:11:51.285 [2024-11-27T06:06:56.382Z] =================================================================================================================== 00:11:51.285 [2024-11-27T06:06:56.382Z] Total : 1394.49 87.16 139.45 0.00 40360.42 2338.44 38606.66 00:11:51.285 06:06:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:51.285 [2024-11-27 06:06:56.204293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:51.285 [2024-11-27 06:06:56.204473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d8ce0 (9): Bad file descriptor 00:11:51.285 [2024-11-27 06:06:56.216249] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
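For reference, the burst of "ABORTED - SQ DELETION" completions above is the expected effect of host_management.sh@84-85: revoking the host on the subsystem tears down its admin and I/O queues while bdevperf still has 64 verify WRITEs in flight (cid 0-63), and re-adding the host lets the bdev_nvme layer reset and reconnect the controller ("Resetting controller successful"). A minimal standalone sketch of that step, assuming the default /var/tmp/spdk.sock RPC socket (the script path and NQNs are the ones shown in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Revoking the host closes its queues; in-flight commands complete with
    # "ABORTED - SQ DELETION", as printed by the initiator above.
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-adding the host allows bdevperf's NVMe bdev module to reconnect.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0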
00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62750 00:11:52.219 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62750) - No such process 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:52.219 { 00:11:52.219 "params": { 00:11:52.219 "name": "Nvme$subsystem", 00:11:52.219 "trtype": "$TEST_TRANSPORT", 00:11:52.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.219 "adrfam": "ipv4", 00:11:52.219 "trsvcid": "$NVMF_PORT", 00:11:52.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.219 "hdgst": ${hdgst:-false}, 00:11:52.219 "ddgst": ${ddgst:-false} 00:11:52.219 }, 00:11:52.219 "method": "bdev_nvme_attach_controller" 00:11:52.219 } 00:11:52.219 EOF 00:11:52.219 )") 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:52.219 06:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:52.219 "params": { 00:11:52.219 "name": "Nvme0", 00:11:52.219 "trtype": "tcp", 00:11:52.219 "traddr": "10.0.0.3", 00:11:52.219 "adrfam": "ipv4", 00:11:52.219 "trsvcid": "4420", 00:11:52.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:52.219 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:52.219 "hdgst": false, 00:11:52.219 "ddgst": false 00:11:52.219 }, 00:11:52.219 "method": "bdev_nvme_attach_controller" 00:11:52.219 }' 00:11:52.219 [2024-11-27 06:06:57.267893] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
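The gen_nvmf_target_json heredoc above is handed to bdevperf as /dev/fd/62; written out as a standalone file it corresponds roughly to the following sketch. The subsystems/config wrapper is the usual SPDK --json layout and is assumed here, the bdev_nvme_attach_controller parameters are the ones printed in the trace, and the file name /tmp/nvme0.json is only illustrative.

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1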
00:11:52.219 [2024-11-27 06:06:57.268000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62790 ] 00:11:52.477 [2024-11-27 06:06:57.421144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.477 [2024-11-27 06:06:57.496899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.477 [2024-11-27 06:06:57.566629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:52.736 Running I/O for 1 seconds... 00:11:53.670 1344.00 IOPS, 84.00 MiB/s 00:11:53.670 Latency(us) 00:11:53.670 [2024-11-27T06:06:58.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.670 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:53.670 Verification LBA range: start 0x0 length 0x400 00:11:53.670 Nvme0n1 : 1.01 1399.34 87.46 0.00 0.00 44683.35 7000.44 46470.98 00:11:53.670 [2024-11-27T06:06:58.767Z] =================================================================================================================== 00:11:53.670 [2024-11-27T06:06:58.767Z] Total : 1399.34 87.46 0.00 0.00 44683.35 7000.44 46470.98 00:11:53.929 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:53.929 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:53.929 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:53.929 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:53.929 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:53.929 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.930 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:53.930 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.930 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:53.930 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.930 06:06:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.930 rmmod nvme_tcp 00:11:53.930 rmmod nvme_fabrics 00:11:53.930 rmmod nvme_keyring 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62698 ']' 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62698 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62698 ']' 00:11:53.930 06:06:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62698 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:53.930 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.188 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62698 00:11:54.188 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:54.188 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:54.188 killing process with pid 62698 00:11:54.188 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62698' 00:11:54.188 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62698 00:11:54.188 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62698 00:11:54.188 [2024-11-27 06:06:59.280002] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:54.447 06:06:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.447 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:54.705 00:11:54.705 real 0m5.630s 00:11:54.705 user 0m19.772s 00:11:54.705 sys 0m1.540s 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:54.705 ************************************ 00:11:54.705 END TEST nvmf_host_management 00:11:54.705 ************************************ 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.705 ************************************ 00:11:54.705 START TEST nvmf_lvol 00:11:54.705 ************************************ 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:54.705 * Looking for test storage... 
00:11:54.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:11:54.705 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:54.965 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.966 --rc genhtml_branch_coverage=1 00:11:54.966 --rc genhtml_function_coverage=1 00:11:54.966 --rc genhtml_legend=1 00:11:54.966 --rc geninfo_all_blocks=1 00:11:54.966 --rc geninfo_unexecuted_blocks=1 00:11:54.966 00:11:54.966 ' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.966 --rc genhtml_branch_coverage=1 00:11:54.966 --rc genhtml_function_coverage=1 00:11:54.966 --rc genhtml_legend=1 00:11:54.966 --rc geninfo_all_blocks=1 00:11:54.966 --rc geninfo_unexecuted_blocks=1 00:11:54.966 00:11:54.966 ' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.966 --rc genhtml_branch_coverage=1 00:11:54.966 --rc genhtml_function_coverage=1 00:11:54.966 --rc genhtml_legend=1 00:11:54.966 --rc geninfo_all_blocks=1 00:11:54.966 --rc geninfo_unexecuted_blocks=1 00:11:54.966 00:11:54.966 ' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.966 --rc genhtml_branch_coverage=1 00:11:54.966 --rc genhtml_function_coverage=1 00:11:54.966 --rc genhtml_legend=1 00:11:54.966 --rc geninfo_all_blocks=1 00:11:54.966 --rc geninfo_unexecuted_blocks=1 00:11:54.966 00:11:54.966 ' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.966 06:06:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:54.966 
06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.966 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
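The interface and namespace variables above (together with the bridge variables that follow) describe the virtual topology that nvmf_veth_init builds in the trace below: two veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, joined to the initiator ends through the nvmf_br bridge. Condensed into plain iproute2 commands, the first pair amounts to roughly the following sketch (names and addresses are the ones the trace uses; the matching "ip link set ... up" calls are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side (10.0.0.1)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side (10.0.0.3)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the two halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The *_if2/*_br2 pair (10.0.0.2 and 10.0.0.4) follows the same pattern, and the
    # iptables ACCEPT rules below carry an SPDK_NVMF comment so nvmftestfini can later
    # strip exactly those rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.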
00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:54.967 Cannot find device "nvmf_init_br" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:54.967 Cannot find device "nvmf_init_br2" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:54.967 Cannot find device "nvmf_tgt_br" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.967 Cannot find device "nvmf_tgt_br2" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:54.967 Cannot find device "nvmf_init_br" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:54.967 Cannot find device "nvmf_init_br2" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:54.967 Cannot find device "nvmf_tgt_br" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:54.967 Cannot find device "nvmf_tgt_br2" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:54.967 Cannot find device "nvmf_br" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:54.967 Cannot find device "nvmf_init_if" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:54.967 Cannot find device "nvmf_init_if2" 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.967 06:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.967 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:55.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:11:55.227 00:11:55.227 --- 10.0.0.3 ping statistics --- 00:11:55.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.227 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:55.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:55.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:11:55.227 00:11:55.227 --- 10.0.0.4 ping statistics --- 00:11:55.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.227 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:11:55.227 00:11:55.227 --- 10.0.0.1 ping statistics --- 00:11:55.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.227 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:55.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:55.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:11:55.227 00:11:55.227 --- 10.0.0.2 ping statistics --- 00:11:55.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.227 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63064 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63064 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63064 ']' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.227 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:55.487 [2024-11-27 06:07:00.341658] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:11:55.487 [2024-11-27 06:07:00.341752] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.487 [2024-11-27 06:07:00.495597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.487 [2024-11-27 06:07:00.567579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.487 [2024-11-27 06:07:00.567643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.487 [2024-11-27 06:07:00.567658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.487 [2024-11-27 06:07:00.567668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.487 [2024-11-27 06:07:00.567678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.487 [2024-11-27 06:07:00.568919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.487 [2024-11-27 06:07:00.569063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.487 [2024-11-27 06:07:00.569069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.747 [2024-11-27 06:07:00.629539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.747 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.004 [2024-11-27 06:07:01.050643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.004 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.571 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:56.571 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.829 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:56.829 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:57.086 06:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:57.343 06:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6a24857a-1ba7-451c-9b8e-f14925653f9c 00:11:57.343 06:07:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6a24857a-1ba7-451c-9b8e-f14925653f9c lvol 20 00:11:57.601 06:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3efdf055-6ce3-4064-b002-f0517bf58083 00:11:57.601 06:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:57.859 06:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3efdf055-6ce3-4064-b002-f0517bf58083 00:11:58.426 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:58.426 [2024-11-27 06:07:03.500647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:58.686 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:58.686 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63132 00:11:58.686 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:58.686 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:00.061 06:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3efdf055-6ce3-4064-b002-f0517bf58083 MY_SNAPSHOT 00:12:00.061 06:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0d863858-0a9a-4c24-8210-165c0c273f53 00:12:00.061 06:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3efdf055-6ce3-4064-b002-f0517bf58083 30 00:12:00.319 06:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0d863858-0a9a-4c24-8210-165c0c273f53 MY_CLONE 00:12:00.887 06:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9d8838c0-de62-4922-8b81-a91b2a8cbefd 00:12:00.887 06:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9d8838c0-de62-4922-8b81-a91b2a8cbefd 00:12:01.454 06:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63132 00:12:09.563 Initializing NVMe Controllers 00:12:09.563 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:12:09.563 Controller IO queue size 128, less than required. 00:12:09.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:09.563 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:09.563 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:09.563 Initialization complete. Launching workers. 
00:12:09.563 ======================================================== 00:12:09.563 Latency(us) 00:12:09.563 Device Information : IOPS MiB/s Average min max 00:12:09.563 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9849.30 38.47 12996.18 2747.29 64260.76 00:12:09.563 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9782.60 38.21 13086.17 3705.84 60113.82 00:12:09.563 ======================================================== 00:12:09.563 Total : 19631.89 76.69 13041.02 2747.29 64260.76 00:12:09.563 00:12:09.563 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:09.563 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3efdf055-6ce3-4064-b002-f0517bf58083 00:12:09.822 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a24857a-1ba7-451c-9b8e-f14925653f9c 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.080 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.080 rmmod nvme_tcp 00:12:10.338 rmmod nvme_fabrics 00:12:10.338 rmmod nvme_keyring 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63064 ']' 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63064 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63064 ']' 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63064 00:12:10.338 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63064 00:12:10.339 killing process with pid 63064 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63064' 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63064 00:12:10.339 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63064 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:10.597 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:12:10.857 00:12:10.857 real 0m16.146s 00:12:10.857 user 1m6.254s 00:12:10.857 sys 0m4.315s 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:10.857 ************************************ 00:12:10.857 END TEST nvmf_lvol 00:12:10.857 ************************************ 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:10.857 ************************************ 00:12:10.857 START TEST nvmf_lvs_grow 00:12:10.857 ************************************ 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:10.857 * Looking for test storage... 00:12:10.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.857 06:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:11.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.124 --rc genhtml_branch_coverage=1 00:12:11.124 --rc genhtml_function_coverage=1 00:12:11.124 --rc genhtml_legend=1 00:12:11.124 --rc geninfo_all_blocks=1 00:12:11.124 --rc geninfo_unexecuted_blocks=1 00:12:11.124 00:12:11.124 ' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:11.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.124 --rc genhtml_branch_coverage=1 00:12:11.124 --rc genhtml_function_coverage=1 00:12:11.124 --rc genhtml_legend=1 00:12:11.124 --rc geninfo_all_blocks=1 00:12:11.124 --rc geninfo_unexecuted_blocks=1 00:12:11.124 00:12:11.124 ' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:11.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.124 --rc genhtml_branch_coverage=1 00:12:11.124 --rc genhtml_function_coverage=1 00:12:11.124 --rc genhtml_legend=1 00:12:11.124 --rc geninfo_all_blocks=1 00:12:11.124 --rc geninfo_unexecuted_blocks=1 00:12:11.124 00:12:11.124 ' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:11.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.124 --rc genhtml_branch_coverage=1 00:12:11.124 --rc genhtml_function_coverage=1 00:12:11.124 --rc genhtml_legend=1 00:12:11.124 --rc geninfo_all_blocks=1 00:12:11.124 --rc geninfo_unexecuted_blocks=1 00:12:11.124 00:12:11.124 ' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:11.124 06:07:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.124 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.124 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:11.125 Cannot find device "nvmf_init_br" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:11.125 Cannot find device "nvmf_init_br2" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:11.125 Cannot find device "nvmf_tgt_br" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.125 Cannot find device "nvmf_tgt_br2" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:11.125 Cannot find device "nvmf_init_br" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:11.125 Cannot find device "nvmf_init_br2" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:11.125 Cannot find device "nvmf_tgt_br" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:11.125 Cannot find device "nvmf_tgt_br2" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:11.125 Cannot find device "nvmf_br" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:11.125 Cannot find device "nvmf_init_if" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:11.125 Cannot find device "nvmf_init_if2" 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:12:11.125 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
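For orientation, the nvmf_veth_init steps traced above amount to the standalone sketch below. This is an illustrative reconstruction, not part of the recorded run: it assumes root privileges and the iproute2 tools, and simply reuses the interface names and 10.0.0.0/24 addresses shown in the trace. The log continues afterwards with the iptables ACCEPT rules for TCP port 4420 and the ping checks that verify this topology.

#!/usr/bin/env bash
# Illustrative sketch of the veth/bridge topology built by nvmf_veth_init
# (names and addresses taken from the traced commands above). Run as root.
set -euo pipefail

NS=nvmf_tgt_ns_spdk

# Target-side interfaces live in a dedicated network namespace.
ip netns add "$NS"

# Two initiator-side and two target-side veth pairs; the *_br peers are
# later enslaved to a common bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: .1/.2 on the initiator side, .3/.4 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# A single bridge ties the initiator- and target-side peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done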
00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:11.383 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:11.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:12:11.383 00:12:11.383 --- 10.0.0.3 ping statistics --- 00:12:11.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.384 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:11.384 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:11.384 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:12:11.384 00:12:11.384 --- 10.0.0.4 ping statistics --- 00:12:11.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.384 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:11.384 00:12:11.384 --- 10.0.0.1 ping statistics --- 00:12:11.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.384 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:11.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:12:11.384 00:12:11.384 --- 10.0.0.2 ping statistics --- 00:12:11.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.384 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.384 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63510 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63510 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63510 ']' 00:12:11.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.643 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.643 [2024-11-27 06:07:16.536158] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:12:11.643 [2024-11-27 06:07:16.536253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.643 [2024-11-27 06:07:16.686992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.902 [2024-11-27 06:07:16.759601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.902 [2024-11-27 06:07:16.759681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.902 [2024-11-27 06:07:16.759697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.902 [2024-11-27 06:07:16.759708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.902 [2024-11-27 06:07:16.759718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.902 [2024-11-27 06:07:16.760232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.902 [2024-11-27 06:07:16.819544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.902 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:12.162 [2024-11-27 06:07:17.231797] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.162 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:12.162 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.162 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.162 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:12.421 ************************************ 00:12:12.421 START TEST lvs_grow_clean 00:12:12.421 ************************************ 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:12.421 06:07:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:12.421 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:12.679 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:12.679 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:12.939 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:12.939 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:12.939 06:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:13.198 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:13.198 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:13.198 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 lvol 150 00:12:13.456 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b2002c0-52f8-44ab-ada7-d3c90ad423e3 00:12:13.456 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:13.456 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:13.715 [2024-11-27 06:07:18.764145] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:13.715 [2024-11-27 06:07:18.764243] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:13.715 true 00:12:13.715 06:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:13.715 06:07:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:14.290 06:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:14.290 06:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:14.550 06:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b2002c0-52f8-44ab-ada7-d3c90ad423e3 00:12:14.809 06:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:15.068 [2024-11-27 06:07:20.009305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:15.068 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:15.326 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63596 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63596 /var/tmp/bdevperf.sock 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63596 ']' 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.327 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:15.327 [2024-11-27 06:07:20.410228] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:12:15.327 [2024-11-27 06:07:20.410578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63596 ] 00:12:15.586 [2024-11-27 06:07:20.561519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.586 [2024-11-27 06:07:20.634296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.845 [2024-11-27 06:07:20.695304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.845 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.845 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:15.845 06:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:16.103 Nvme0n1 00:12:16.103 06:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:16.362 [ 00:12:16.362 { 00:12:16.362 "name": "Nvme0n1", 00:12:16.362 "aliases": [ 00:12:16.362 "3b2002c0-52f8-44ab-ada7-d3c90ad423e3" 00:12:16.362 ], 00:12:16.362 "product_name": "NVMe disk", 00:12:16.362 "block_size": 4096, 00:12:16.362 "num_blocks": 38912, 00:12:16.362 "uuid": "3b2002c0-52f8-44ab-ada7-d3c90ad423e3", 00:12:16.362 "numa_id": -1, 00:12:16.362 "assigned_rate_limits": { 00:12:16.362 "rw_ios_per_sec": 0, 00:12:16.362 "rw_mbytes_per_sec": 0, 00:12:16.362 "r_mbytes_per_sec": 0, 00:12:16.362 "w_mbytes_per_sec": 0 00:12:16.362 }, 00:12:16.362 "claimed": false, 00:12:16.362 "zoned": false, 00:12:16.362 "supported_io_types": { 00:12:16.362 "read": true, 00:12:16.362 "write": true, 00:12:16.362 "unmap": true, 00:12:16.362 "flush": true, 00:12:16.362 "reset": true, 00:12:16.362 "nvme_admin": true, 00:12:16.362 "nvme_io": true, 00:12:16.362 "nvme_io_md": false, 00:12:16.362 "write_zeroes": true, 00:12:16.362 "zcopy": false, 00:12:16.362 "get_zone_info": false, 00:12:16.362 "zone_management": false, 00:12:16.362 "zone_append": false, 00:12:16.362 "compare": true, 00:12:16.362 "compare_and_write": true, 00:12:16.362 "abort": true, 00:12:16.362 "seek_hole": false, 00:12:16.362 "seek_data": false, 00:12:16.362 "copy": true, 00:12:16.362 "nvme_iov_md": false 00:12:16.362 }, 00:12:16.362 "memory_domains": [ 00:12:16.362 { 00:12:16.362 "dma_device_id": "system", 00:12:16.362 "dma_device_type": 1 00:12:16.362 } 00:12:16.362 ], 00:12:16.362 "driver_specific": { 00:12:16.362 "nvme": [ 00:12:16.362 { 00:12:16.362 "trid": { 00:12:16.362 "trtype": "TCP", 00:12:16.362 "adrfam": "IPv4", 00:12:16.362 "traddr": "10.0.0.3", 00:12:16.362 "trsvcid": "4420", 00:12:16.362 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:16.362 }, 00:12:16.362 "ctrlr_data": { 00:12:16.362 "cntlid": 1, 00:12:16.362 "vendor_id": "0x8086", 00:12:16.362 "model_number": "SPDK bdev Controller", 00:12:16.362 "serial_number": "SPDK0", 00:12:16.362 "firmware_revision": "25.01", 00:12:16.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:16.362 "oacs": { 00:12:16.362 "security": 0, 00:12:16.362 "format": 0, 00:12:16.362 "firmware": 0, 
00:12:16.362 "ns_manage": 0 00:12:16.362 }, 00:12:16.362 "multi_ctrlr": true, 00:12:16.362 "ana_reporting": false 00:12:16.362 }, 00:12:16.362 "vs": { 00:12:16.362 "nvme_version": "1.3" 00:12:16.362 }, 00:12:16.362 "ns_data": { 00:12:16.362 "id": 1, 00:12:16.362 "can_share": true 00:12:16.362 } 00:12:16.362 } 00:12:16.362 ], 00:12:16.362 "mp_policy": "active_passive" 00:12:16.362 } 00:12:16.362 } 00:12:16.362 ] 00:12:16.362 06:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63612 00:12:16.362 06:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:16.362 06:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:16.621 Running I/O for 10 seconds... 00:12:17.556 Latency(us) 00:12:17.556 [2024-11-27T06:07:22.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.556 Nvme0n1 : 1.00 6868.00 26.83 0.00 0.00 0.00 0.00 0.00 00:12:17.556 [2024-11-27T06:07:22.653Z] =================================================================================================================== 00:12:17.556 [2024-11-27T06:07:22.653Z] Total : 6868.00 26.83 0.00 0.00 0.00 0.00 0.00 00:12:17.556 00:12:18.491 06:07:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:18.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.491 Nvme0n1 : 2.00 6799.50 26.56 0.00 0.00 0.00 0.00 0.00 00:12:18.491 [2024-11-27T06:07:23.588Z] =================================================================================================================== 00:12:18.491 [2024-11-27T06:07:23.588Z] Total : 6799.50 26.56 0.00 0.00 0.00 0.00 0.00 00:12:18.491 00:12:18.750 true 00:12:18.750 06:07:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:18.750 06:07:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:19.009 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:19.009 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:19.009 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63612 00:12:19.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.576 Nvme0n1 : 3.00 6734.33 26.31 0.00 0.00 0.00 0.00 0.00 00:12:19.576 [2024-11-27T06:07:24.673Z] =================================================================================================================== 00:12:19.576 [2024-11-27T06:07:24.673Z] Total : 6734.33 26.31 0.00 0.00 0.00 0.00 0.00 00:12:19.576 00:12:20.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.511 Nvme0n1 : 4.00 6412.25 25.05 0.00 0.00 0.00 0.00 0.00 00:12:20.511 [2024-11-27T06:07:25.608Z] 
=================================================================================================================== 00:12:20.511 [2024-11-27T06:07:25.608Z] Total : 6412.25 25.05 0.00 0.00 0.00 0.00 0.00 00:12:20.511 00:12:21.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.886 Nvme0n1 : 5.00 6445.80 25.18 0.00 0.00 0.00 0.00 0.00 00:12:21.886 [2024-11-27T06:07:26.983Z] =================================================================================================================== 00:12:21.886 [2024-11-27T06:07:26.983Z] Total : 6445.80 25.18 0.00 0.00 0.00 0.00 0.00 00:12:21.886 00:12:22.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.472 Nvme0n1 : 6.00 6514.50 25.45 0.00 0.00 0.00 0.00 0.00 00:12:22.472 [2024-11-27T06:07:27.569Z] =================================================================================================================== 00:12:22.472 [2024-11-27T06:07:27.569Z] Total : 6514.50 25.45 0.00 0.00 0.00 0.00 0.00 00:12:22.472 00:12:23.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.849 Nvme0n1 : 7.00 6545.43 25.57 0.00 0.00 0.00 0.00 0.00 00:12:23.849 [2024-11-27T06:07:28.946Z] =================================================================================================================== 00:12:23.849 [2024-11-27T06:07:28.946Z] Total : 6545.43 25.57 0.00 0.00 0.00 0.00 0.00 00:12:23.849 00:12:24.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.786 Nvme0n1 : 8.00 6568.62 25.66 0.00 0.00 0.00 0.00 0.00 00:12:24.786 [2024-11-27T06:07:29.883Z] =================================================================================================================== 00:12:24.786 [2024-11-27T06:07:29.883Z] Total : 6568.62 25.66 0.00 0.00 0.00 0.00 0.00 00:12:24.786 00:12:25.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.723 Nvme0n1 : 9.00 6600.78 25.78 0.00 0.00 0.00 0.00 0.00 00:12:25.723 [2024-11-27T06:07:30.820Z] =================================================================================================================== 00:12:25.723 [2024-11-27T06:07:30.820Z] Total : 6600.78 25.78 0.00 0.00 0.00 0.00 0.00 00:12:25.723 00:12:26.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.660 Nvme0n1 : 10.00 6601.10 25.79 0.00 0.00 0.00 0.00 0.00 00:12:26.660 [2024-11-27T06:07:31.757Z] =================================================================================================================== 00:12:26.660 [2024-11-27T06:07:31.757Z] Total : 6601.10 25.79 0.00 0.00 0.00 0.00 0.00 00:12:26.660 00:12:26.660 00:12:26.660 Latency(us) 00:12:26.660 [2024-11-27T06:07:31.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.660 Nvme0n1 : 10.02 6601.49 25.79 0.00 0.00 19384.38 4825.83 219247.71 00:12:26.660 [2024-11-27T06:07:31.757Z] =================================================================================================================== 00:12:26.660 [2024-11-27T06:07:31.757Z] Total : 6601.49 25.79 0.00 0.00 19384.38 4825.83 219247.71 00:12:26.660 { 00:12:26.660 "results": [ 00:12:26.660 { 00:12:26.660 "job": "Nvme0n1", 00:12:26.660 "core_mask": "0x2", 00:12:26.660 "workload": "randwrite", 00:12:26.660 "status": "finished", 00:12:26.660 "queue_depth": 128, 00:12:26.660 "io_size": 4096, 00:12:26.660 "runtime": 
10.018793, 00:12:26.660 "iops": 6601.493812677834, 00:12:26.660 "mibps": 25.78708520577279, 00:12:26.660 "io_failed": 0, 00:12:26.660 "io_timeout": 0, 00:12:26.660 "avg_latency_us": 19384.378819593443, 00:12:26.660 "min_latency_us": 4825.832727272727, 00:12:26.660 "max_latency_us": 219247.70909090908 00:12:26.660 } 00:12:26.660 ], 00:12:26.660 "core_count": 1 00:12:26.660 } 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63596 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63596 ']' 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63596 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63596 00:12:26.660 killing process with pid 63596 00:12:26.660 Received shutdown signal, test time was about 10.000000 seconds 00:12:26.660 00:12:26.660 Latency(us) 00:12:26.660 [2024-11-27T06:07:31.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.660 [2024-11-27T06:07:31.757Z] =================================================================================================================== 00:12:26.660 [2024-11-27T06:07:31.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63596' 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63596 00:12:26.660 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63596 00:12:26.918 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:27.177 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:27.744 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:27.744 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:28.002 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:28.002 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:28.002 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.261 [2024-11-27 06:07:33.130774] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:28.261 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:28.520 request: 00:12:28.520 { 00:12:28.520 "uuid": "8ecf961e-52f4-4e78-b5aa-e6073032b208", 00:12:28.520 "method": "bdev_lvol_get_lvstores", 00:12:28.520 "req_id": 1 00:12:28.520 } 00:12:28.520 Got JSON-RPC error response 00:12:28.520 response: 00:12:28.520 { 00:12:28.520 "code": -19, 00:12:28.520 "message": "No such device" 00:12:28.520 } 00:12:28.520 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:28.520 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:28.520 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:28.520 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:28.520 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:28.780 aio_bdev 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3b2002c0-52f8-44ab-ada7-d3c90ad423e3 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3b2002c0-52f8-44ab-ada7-d3c90ad423e3 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.780 06:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:29.347 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b2002c0-52f8-44ab-ada7-d3c90ad423e3 -t 2000 00:12:29.605 [ 00:12:29.605 { 00:12:29.605 "name": "3b2002c0-52f8-44ab-ada7-d3c90ad423e3", 00:12:29.605 "aliases": [ 00:12:29.605 "lvs/lvol" 00:12:29.605 ], 00:12:29.605 "product_name": "Logical Volume", 00:12:29.605 "block_size": 4096, 00:12:29.605 "num_blocks": 38912, 00:12:29.605 "uuid": "3b2002c0-52f8-44ab-ada7-d3c90ad423e3", 00:12:29.605 "assigned_rate_limits": { 00:12:29.605 "rw_ios_per_sec": 0, 00:12:29.605 "rw_mbytes_per_sec": 0, 00:12:29.605 "r_mbytes_per_sec": 0, 00:12:29.605 "w_mbytes_per_sec": 0 00:12:29.605 }, 00:12:29.605 "claimed": false, 00:12:29.605 "zoned": false, 00:12:29.605 "supported_io_types": { 00:12:29.605 "read": true, 00:12:29.605 "write": true, 00:12:29.605 "unmap": true, 00:12:29.606 "flush": false, 00:12:29.606 "reset": true, 00:12:29.606 "nvme_admin": false, 00:12:29.606 "nvme_io": false, 00:12:29.606 "nvme_io_md": false, 00:12:29.606 "write_zeroes": true, 00:12:29.606 "zcopy": false, 00:12:29.606 "get_zone_info": false, 00:12:29.606 "zone_management": false, 00:12:29.606 "zone_append": false, 00:12:29.606 "compare": false, 00:12:29.606 "compare_and_write": false, 00:12:29.606 "abort": false, 00:12:29.606 "seek_hole": true, 00:12:29.606 "seek_data": true, 00:12:29.606 "copy": false, 00:12:29.606 "nvme_iov_md": false 00:12:29.606 }, 00:12:29.606 "driver_specific": { 00:12:29.606 "lvol": { 00:12:29.606 "lvol_store_uuid": "8ecf961e-52f4-4e78-b5aa-e6073032b208", 00:12:29.606 "base_bdev": "aio_bdev", 00:12:29.606 "thin_provision": false, 00:12:29.606 "num_allocated_clusters": 38, 00:12:29.606 "snapshot": false, 00:12:29.606 "clone": false, 00:12:29.606 "esnap_clone": false 00:12:29.606 } 00:12:29.606 } 00:12:29.606 } 00:12:29.606 ] 00:12:29.606 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:29.606 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:29.606 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:29.864 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:29.864 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:29.864 06:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:30.124 06:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:30.124 06:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3b2002c0-52f8-44ab-ada7-d3c90ad423e3 00:12:30.383 06:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ecf961e-52f4-4e78-b5aa-e6073032b208 00:12:30.641 06:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:30.901 06:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:31.473 ************************************ 00:12:31.473 END TEST lvs_grow_clean 00:12:31.473 ************************************ 00:12:31.473 00:12:31.473 real 0m19.043s 00:12:31.473 user 0m17.788s 00:12:31.473 sys 0m2.666s 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.474 ************************************ 00:12:31.474 START TEST lvs_grow_dirty 00:12:31.474 ************************************ 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:31.474 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:31.732 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:31.732 06:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:31.991 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:31.991 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:31.991 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:32.557 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:32.557 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:32.557 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc lvol 150 00:12:32.557 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=568526a7-82bb-4c04-960e-bb7c035b4452 00:12:32.557 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:32.557 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:32.815 [2024-11-27 06:07:37.868977] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:32.815 [2024-11-27 06:07:37.869091] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:32.815 true 00:12:32.815 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:32.815 06:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:33.382 06:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:33.382 06:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:33.639 06:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 568526a7-82bb-4c04-960e-bb7c035b4452 00:12:33.897 06:07:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:34.155 [2024-11-27 06:07:39.121609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:34.155 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63873 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63873 /var/tmp/bdevperf.sock 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63873 ']' 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.412 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:34.412 [2024-11-27 06:07:39.473999] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:12:34.412 [2024-11-27 06:07:39.474098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63873 ] 00:12:34.669 [2024-11-27 06:07:39.628619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.669 [2024-11-27 06:07:39.720497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.927 [2024-11-27 06:07:39.802664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.927 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.927 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:34.927 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:35.186 Nvme0n1 00:12:35.186 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:35.445 [ 00:12:35.445 { 00:12:35.445 "name": "Nvme0n1", 00:12:35.445 "aliases": [ 00:12:35.445 "568526a7-82bb-4c04-960e-bb7c035b4452" 00:12:35.445 ], 00:12:35.445 "product_name": "NVMe disk", 00:12:35.445 "block_size": 4096, 00:12:35.445 "num_blocks": 38912, 00:12:35.445 "uuid": "568526a7-82bb-4c04-960e-bb7c035b4452", 00:12:35.445 "numa_id": -1, 00:12:35.445 "assigned_rate_limits": { 00:12:35.445 "rw_ios_per_sec": 0, 00:12:35.445 "rw_mbytes_per_sec": 0, 00:12:35.445 "r_mbytes_per_sec": 0, 00:12:35.445 "w_mbytes_per_sec": 0 00:12:35.445 }, 00:12:35.445 "claimed": false, 00:12:35.445 "zoned": false, 00:12:35.445 "supported_io_types": { 00:12:35.445 "read": true, 00:12:35.445 "write": true, 00:12:35.445 "unmap": true, 00:12:35.445 "flush": true, 00:12:35.445 "reset": true, 00:12:35.445 "nvme_admin": true, 00:12:35.445 "nvme_io": true, 00:12:35.445 "nvme_io_md": false, 00:12:35.445 "write_zeroes": true, 00:12:35.445 "zcopy": false, 00:12:35.445 "get_zone_info": false, 00:12:35.445 "zone_management": false, 00:12:35.445 "zone_append": false, 00:12:35.445 "compare": true, 00:12:35.445 "compare_and_write": true, 00:12:35.445 "abort": true, 00:12:35.445 "seek_hole": false, 00:12:35.445 "seek_data": false, 00:12:35.445 "copy": true, 00:12:35.445 "nvme_iov_md": false 00:12:35.445 }, 00:12:35.445 "memory_domains": [ 00:12:35.445 { 00:12:35.445 "dma_device_id": "system", 00:12:35.445 "dma_device_type": 1 00:12:35.445 } 00:12:35.445 ], 00:12:35.445 "driver_specific": { 00:12:35.445 "nvme": [ 00:12:35.445 { 00:12:35.445 "trid": { 00:12:35.445 "trtype": "TCP", 00:12:35.445 "adrfam": "IPv4", 00:12:35.445 "traddr": "10.0.0.3", 00:12:35.445 "trsvcid": "4420", 00:12:35.445 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:35.445 }, 00:12:35.445 "ctrlr_data": { 00:12:35.445 "cntlid": 1, 00:12:35.445 "vendor_id": "0x8086", 00:12:35.445 "model_number": "SPDK bdev Controller", 00:12:35.445 "serial_number": "SPDK0", 00:12:35.445 "firmware_revision": "25.01", 00:12:35.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:35.445 "oacs": { 00:12:35.445 "security": 0, 00:12:35.445 "format": 0, 00:12:35.445 "firmware": 0, 
00:12:35.445 "ns_manage": 0 00:12:35.445 }, 00:12:35.445 "multi_ctrlr": true, 00:12:35.445 "ana_reporting": false 00:12:35.445 }, 00:12:35.445 "vs": { 00:12:35.445 "nvme_version": "1.3" 00:12:35.445 }, 00:12:35.445 "ns_data": { 00:12:35.445 "id": 1, 00:12:35.445 "can_share": true 00:12:35.445 } 00:12:35.445 } 00:12:35.445 ], 00:12:35.445 "mp_policy": "active_passive" 00:12:35.445 } 00:12:35.445 } 00:12:35.445 ] 00:12:35.445 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63889 00:12:35.445 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:35.445 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:35.704 Running I/O for 10 seconds... 00:12:36.639 Latency(us) 00:12:36.639 [2024-11-27T06:07:41.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.639 Nvme0n1 : 1.00 6967.00 27.21 0.00 0.00 0.00 0.00 0.00 00:12:36.639 [2024-11-27T06:07:41.736Z] =================================================================================================================== 00:12:36.639 [2024-11-27T06:07:41.736Z] Total : 6967.00 27.21 0.00 0.00 0.00 0.00 0.00 00:12:36.639 00:12:37.573 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:37.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.574 Nvme0n1 : 2.00 7039.50 27.50 0.00 0.00 0.00 0.00 0.00 00:12:37.574 [2024-11-27T06:07:42.671Z] =================================================================================================================== 00:12:37.574 [2024-11-27T06:07:42.671Z] Total : 7039.50 27.50 0.00 0.00 0.00 0.00 0.00 00:12:37.574 00:12:37.831 true 00:12:37.831 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:37.831 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:38.106 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:38.106 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:38.106 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63889 00:12:38.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.745 Nvme0n1 : 3.00 7021.33 27.43 0.00 0.00 0.00 0.00 0.00 00:12:38.745 [2024-11-27T06:07:43.842Z] =================================================================================================================== 00:12:38.745 [2024-11-27T06:07:43.842Z] Total : 7021.33 27.43 0.00 0.00 0.00 0.00 0.00 00:12:38.745 00:12:39.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.678 Nvme0n1 : 4.00 6980.50 27.27 0.00 0.00 0.00 0.00 0.00 00:12:39.678 [2024-11-27T06:07:44.775Z] 
=================================================================================================================== 00:12:39.678 [2024-11-27T06:07:44.775Z] Total : 6980.50 27.27 0.00 0.00 0.00 0.00 0.00 00:12:39.678 00:12:40.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.613 Nvme0n1 : 5.00 6956.00 27.17 0.00 0.00 0.00 0.00 0.00 00:12:40.613 [2024-11-27T06:07:45.710Z] =================================================================================================================== 00:12:40.613 [2024-11-27T06:07:45.710Z] Total : 6956.00 27.17 0.00 0.00 0.00 0.00 0.00 00:12:40.613 00:12:41.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.988 Nvme0n1 : 6.00 6946.67 27.14 0.00 0.00 0.00 0.00 0.00 00:12:41.988 [2024-11-27T06:07:47.085Z] =================================================================================================================== 00:12:41.988 [2024-11-27T06:07:47.085Z] Total : 6946.67 27.14 0.00 0.00 0.00 0.00 0.00 00:12:41.988 00:12:42.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.555 Nvme0n1 : 7.00 6710.57 26.21 0.00 0.00 0.00 0.00 0.00 00:12:42.555 [2024-11-27T06:07:47.652Z] =================================================================================================================== 00:12:42.555 [2024-11-27T06:07:47.652Z] Total : 6710.57 26.21 0.00 0.00 0.00 0.00 0.00 00:12:42.555 00:12:43.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.930 Nvme0n1 : 8.00 6681.38 26.10 0.00 0.00 0.00 0.00 0.00 00:12:43.930 [2024-11-27T06:07:49.027Z] =================================================================================================================== 00:12:43.930 [2024-11-27T06:07:49.027Z] Total : 6681.38 26.10 0.00 0.00 0.00 0.00 0.00 00:12:43.930 00:12:44.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.866 Nvme0n1 : 9.00 6658.67 26.01 0.00 0.00 0.00 0.00 0.00 00:12:44.866 [2024-11-27T06:07:49.963Z] =================================================================================================================== 00:12:44.866 [2024-11-27T06:07:49.963Z] Total : 6658.67 26.01 0.00 0.00 0.00 0.00 0.00 00:12:44.866 00:12:45.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.804 Nvme0n1 : 10.00 6640.50 25.94 0.00 0.00 0.00 0.00 0.00 00:12:45.804 [2024-11-27T06:07:50.901Z] =================================================================================================================== 00:12:45.804 [2024-11-27T06:07:50.901Z] Total : 6640.50 25.94 0.00 0.00 0.00 0.00 0.00 00:12:45.804 00:12:45.804 00:12:45.804 Latency(us) 00:12:45.804 [2024-11-27T06:07:50.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.804 Nvme0n1 : 10.01 6649.64 25.98 0.00 0.00 19242.48 8638.84 245938.73 00:12:45.804 [2024-11-27T06:07:50.901Z] =================================================================================================================== 00:12:45.804 [2024-11-27T06:07:50.901Z] Total : 6649.64 25.98 0.00 0.00 19242.48 8638.84 245938.73 00:12:45.804 { 00:12:45.804 "results": [ 00:12:45.804 { 00:12:45.804 "job": "Nvme0n1", 00:12:45.804 "core_mask": "0x2", 00:12:45.804 "workload": "randwrite", 00:12:45.804 "status": "finished", 00:12:45.804 "queue_depth": 128, 00:12:45.804 "io_size": 4096, 00:12:45.804 "runtime": 
10.005509, 00:12:45.804 "iops": 6649.636715133633, 00:12:45.804 "mibps": 25.975143418490752, 00:12:45.804 "io_failed": 0, 00:12:45.804 "io_timeout": 0, 00:12:45.804 "avg_latency_us": 19242.47802837416, 00:12:45.804 "min_latency_us": 8638.836363636363, 00:12:45.804 "max_latency_us": 245938.73454545456 00:12:45.804 } 00:12:45.804 ], 00:12:45.804 "core_count": 1 00:12:45.804 } 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63873 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63873 ']' 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63873 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63873 00:12:45.804 killing process with pid 63873 00:12:45.804 Received shutdown signal, test time was about 10.000000 seconds 00:12:45.804 00:12:45.804 Latency(us) 00:12:45.804 [2024-11-27T06:07:50.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.804 [2024-11-27T06:07:50.901Z] =================================================================================================================== 00:12:45.804 [2024-11-27T06:07:50.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63873' 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63873 00:12:45.804 06:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63873 00:12:46.061 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:46.317 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:46.618 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:46.618 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:47.186 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:47.186 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:47.186 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63510 
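(The lvs_grow_dirty variant diverges from the clean run above at this point: instead of tearing the lvstore down cleanly, the test SIGKILLs the running nvmf target — pid 63510 in this run — while the lvol store still has unflushed metadata, leaving the blobstore dirty on the AIO file. A minimal sketch of that step, assuming $nvmfpid holds the target pid; the concrete pid is specific to this run:

  # force an unclean shutdown so the lvstore is left dirty on aio_bdev
  kill -9 "$nvmfpid"        # SIGKILL: no RPC teardown, lvstore metadata not flushed
  wait "$nvmfpid" || true   # reap the killed process; a non-zero status is expected here
)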
00:12:47.186 06:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63510 00:12:47.186 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63510 Killed "${NVMF_APP[@]}" "$@" 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64028 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64028 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64028 ']' 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.186 06:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:47.186 [2024-11-27 06:07:52.078876] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:12:47.186 [2024-11-27 06:07:52.078964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.186 [2024-11-27 06:07:52.227604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.445 [2024-11-27 06:07:52.311354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.445 [2024-11-27 06:07:52.311775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.445 [2024-11-27 06:07:52.311876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.445 [2024-11-27 06:07:52.311961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.445 [2024-11-27 06:07:52.312100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
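(After the unclean shutdown the test starts a fresh nvmf target and re-creates the AIO bdev; reopening the same backing file is what triggers the blobstore recovery — "Performing recovery on blobstore" below — after which the lvstore and its lvol reappear without an explicit import step. A hedged sketch of that recovery path using the same RPCs as the log; $rootdir, $testdir and $lvs are placeholders, and the ip netns wrapper used by this CI environment is omitted:

  # restart the target and reopen the backing file; blobstore recovery runs on load
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  "$rootdir/scripts/rpc.py" bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  # the recovered lvstore should still report the grown size (99 data clusters in this run)
  "$rootdir/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
)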
00:12:47.445 [2024-11-27 06:07:52.312757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.445 [2024-11-27 06:07:52.397598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.011 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.011 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:48.011 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.011 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.011 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:48.268 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.268 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:48.526 [2024-11-27 06:07:53.472565] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:48.526 [2024-11-27 06:07:53.475351] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:48.526 [2024-11-27 06:07:53.476532] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 568526a7-82bb-4c04-960e-bb7c035b4452 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=568526a7-82bb-4c04-960e-bb7c035b4452 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.526 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:48.784 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 568526a7-82bb-4c04-960e-bb7c035b4452 -t 2000 00:12:49.041 [ 00:12:49.041 { 00:12:49.041 "name": "568526a7-82bb-4c04-960e-bb7c035b4452", 00:12:49.041 "aliases": [ 00:12:49.041 "lvs/lvol" 00:12:49.041 ], 00:12:49.041 "product_name": "Logical Volume", 00:12:49.041 "block_size": 4096, 00:12:49.041 "num_blocks": 38912, 00:12:49.041 "uuid": "568526a7-82bb-4c04-960e-bb7c035b4452", 00:12:49.041 "assigned_rate_limits": { 00:12:49.041 "rw_ios_per_sec": 0, 00:12:49.041 "rw_mbytes_per_sec": 0, 00:12:49.041 "r_mbytes_per_sec": 0, 00:12:49.041 "w_mbytes_per_sec": 0 00:12:49.041 }, 00:12:49.041 
"claimed": false, 00:12:49.041 "zoned": false, 00:12:49.041 "supported_io_types": { 00:12:49.042 "read": true, 00:12:49.042 "write": true, 00:12:49.042 "unmap": true, 00:12:49.042 "flush": false, 00:12:49.042 "reset": true, 00:12:49.042 "nvme_admin": false, 00:12:49.042 "nvme_io": false, 00:12:49.042 "nvme_io_md": false, 00:12:49.042 "write_zeroes": true, 00:12:49.042 "zcopy": false, 00:12:49.042 "get_zone_info": false, 00:12:49.042 "zone_management": false, 00:12:49.042 "zone_append": false, 00:12:49.042 "compare": false, 00:12:49.042 "compare_and_write": false, 00:12:49.042 "abort": false, 00:12:49.042 "seek_hole": true, 00:12:49.042 "seek_data": true, 00:12:49.042 "copy": false, 00:12:49.042 "nvme_iov_md": false 00:12:49.042 }, 00:12:49.042 "driver_specific": { 00:12:49.042 "lvol": { 00:12:49.042 "lvol_store_uuid": "2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc", 00:12:49.042 "base_bdev": "aio_bdev", 00:12:49.042 "thin_provision": false, 00:12:49.042 "num_allocated_clusters": 38, 00:12:49.042 "snapshot": false, 00:12:49.042 "clone": false, 00:12:49.042 "esnap_clone": false 00:12:49.042 } 00:12:49.042 } 00:12:49.042 } 00:12:49.042 ] 00:12:49.042 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:49.042 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:49.042 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:49.299 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:49.299 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:49.299 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:49.866 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:49.866 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:49.866 [2024-11-27 06:07:54.918315] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:50.124 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.125 06:07:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:50.125 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:50.384 request: 00:12:50.384 { 00:12:50.384 "uuid": "2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc", 00:12:50.384 "method": "bdev_lvol_get_lvstores", 00:12:50.384 "req_id": 1 00:12:50.384 } 00:12:50.384 Got JSON-RPC error response 00:12:50.384 response: 00:12:50.384 { 00:12:50.384 "code": -19, 00:12:50.384 "message": "No such device" 00:12:50.384 } 00:12:50.384 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:50.384 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.384 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.384 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.384 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:50.642 aio_bdev 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 568526a7-82bb-4c04-960e-bb7c035b4452 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=568526a7-82bb-4c04-960e-bb7c035b4452 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.642 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:50.901 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 568526a7-82bb-4c04-960e-bb7c035b4452 -t 2000 00:12:51.162 [ 00:12:51.162 { 
00:12:51.162 "name": "568526a7-82bb-4c04-960e-bb7c035b4452", 00:12:51.162 "aliases": [ 00:12:51.162 "lvs/lvol" 00:12:51.162 ], 00:12:51.162 "product_name": "Logical Volume", 00:12:51.162 "block_size": 4096, 00:12:51.162 "num_blocks": 38912, 00:12:51.162 "uuid": "568526a7-82bb-4c04-960e-bb7c035b4452", 00:12:51.162 "assigned_rate_limits": { 00:12:51.162 "rw_ios_per_sec": 0, 00:12:51.162 "rw_mbytes_per_sec": 0, 00:12:51.162 "r_mbytes_per_sec": 0, 00:12:51.162 "w_mbytes_per_sec": 0 00:12:51.162 }, 00:12:51.162 "claimed": false, 00:12:51.162 "zoned": false, 00:12:51.162 "supported_io_types": { 00:12:51.162 "read": true, 00:12:51.162 "write": true, 00:12:51.162 "unmap": true, 00:12:51.162 "flush": false, 00:12:51.162 "reset": true, 00:12:51.162 "nvme_admin": false, 00:12:51.162 "nvme_io": false, 00:12:51.162 "nvme_io_md": false, 00:12:51.162 "write_zeroes": true, 00:12:51.162 "zcopy": false, 00:12:51.162 "get_zone_info": false, 00:12:51.162 "zone_management": false, 00:12:51.162 "zone_append": false, 00:12:51.162 "compare": false, 00:12:51.162 "compare_and_write": false, 00:12:51.162 "abort": false, 00:12:51.162 "seek_hole": true, 00:12:51.162 "seek_data": true, 00:12:51.162 "copy": false, 00:12:51.162 "nvme_iov_md": false 00:12:51.162 }, 00:12:51.162 "driver_specific": { 00:12:51.162 "lvol": { 00:12:51.162 "lvol_store_uuid": "2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc", 00:12:51.162 "base_bdev": "aio_bdev", 00:12:51.162 "thin_provision": false, 00:12:51.162 "num_allocated_clusters": 38, 00:12:51.162 "snapshot": false, 00:12:51.162 "clone": false, 00:12:51.162 "esnap_clone": false 00:12:51.162 } 00:12:51.162 } 00:12:51.162 } 00:12:51.162 ] 00:12:51.162 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:51.162 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:51.162 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:51.424 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:51.424 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:51.424 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:51.684 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:51.684 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 568526a7-82bb-4c04-960e-bb7c035b4452 00:12:51.944 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc 00:12:52.203 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:52.462 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:53.031 00:12:53.031 real 0m21.468s 00:12:53.031 user 0m44.376s 00:12:53.031 sys 0m8.088s 00:12:53.031 ************************************ 00:12:53.031 END TEST lvs_grow_dirty 00:12:53.031 ************************************ 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:53.031 nvmf_trace.0 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:53.031 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.292 rmmod nvme_tcp 00:12:53.292 rmmod nvme_fabrics 00:12:53.292 rmmod nvme_keyring 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64028 ']' 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64028 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64028 ']' 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64028 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:53.292 06:07:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64028 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.292 killing process with pid 64028 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64028' 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64028 00:12:53.292 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64028 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.551 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:53.552 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:53.552 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:53.552 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:53.810 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:12:53.811 00:12:53.811 real 0m43.013s 00:12:53.811 user 1m9.209s 00:12:53.811 sys 0m11.835s 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.811 ************************************ 00:12:53.811 END TEST nvmf_lvs_grow 00:12:53.811 ************************************ 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.811 06:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.071 ************************************ 00:12:54.071 START TEST nvmf_bdev_io_wait 00:12:54.071 ************************************ 00:12:54.071 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:54.071 * Looking for test storage... 
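The lvs_grow_dirty teardown traced above reduces to a short JSON-RPC sequence. A minimal sketch of that sequence (not part of the captured output), assuming the in-tree scripts/rpc.py client on its default /var/tmp/spdk.sock socket and paths relative to the SPDK repo root; the lvol store UUID, lvol bdev UUID, and aio_bdev backing file are the ones from this particular run and would differ elsewhere:

    # Drop the AIO bdev backing the lvol store to simulate a dirty shutdown;
    # the store and its lvol disappear with it.
    scripts/rpc.py bdev_aio_delete aio_bdev
    # Querying the store now fails with -19 (No such device), as in the trace.
    scripts/rpc.py bdev_lvol_get_lvstores -u 2ca6f8a2-10af-4e4f-9512-ee13fe4a0ffc || true
    # Re-create the AIO bdev on the same backing file; the lvol store is re-examined from disk.
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine
    # The lvol bdev should reappear with its allocated clusters intact.
    scripts/rpc.py bdev_get_bdevs -b 568526a7-82bb-4c04-960e-bb7c035b4452 -t 2000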
00:12:54.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.071 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:54.071 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:54.071 06:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:54.071 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.072 --rc genhtml_branch_coverage=1 00:12:54.072 --rc genhtml_function_coverage=1 00:12:54.072 --rc genhtml_legend=1 00:12:54.072 --rc geninfo_all_blocks=1 00:12:54.072 --rc geninfo_unexecuted_blocks=1 00:12:54.072 00:12:54.072 ' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.072 --rc genhtml_branch_coverage=1 00:12:54.072 --rc genhtml_function_coverage=1 00:12:54.072 --rc genhtml_legend=1 00:12:54.072 --rc geninfo_all_blocks=1 00:12:54.072 --rc geninfo_unexecuted_blocks=1 00:12:54.072 00:12:54.072 ' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.072 --rc genhtml_branch_coverage=1 00:12:54.072 --rc genhtml_function_coverage=1 00:12:54.072 --rc genhtml_legend=1 00:12:54.072 --rc geninfo_all_blocks=1 00:12:54.072 --rc geninfo_unexecuted_blocks=1 00:12:54.072 00:12:54.072 ' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.072 --rc genhtml_branch_coverage=1 00:12:54.072 --rc genhtml_function_coverage=1 00:12:54.072 --rc genhtml_legend=1 00:12:54.072 --rc geninfo_all_blocks=1 00:12:54.072 --rc geninfo_unexecuted_blocks=1 00:12:54.072 00:12:54.072 ' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:12:54.072 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:54.073 
06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:54.073 Cannot find device "nvmf_init_br" 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:54.073 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:54.332 Cannot find device "nvmf_init_br2" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:54.332 Cannot find device "nvmf_tgt_br" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:54.332 Cannot find device "nvmf_tgt_br2" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:54.332 Cannot find device "nvmf_init_br" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:54.332 Cannot find device "nvmf_init_br2" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:54.332 Cannot find device "nvmf_tgt_br" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:54.332 Cannot find device "nvmf_tgt_br2" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:54.332 Cannot find device "nvmf_br" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:54.332 Cannot find device "nvmf_init_if" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:54.332 Cannot find device "nvmf_init_if2" 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:54.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:12:54.332 
06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:54.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:54.332 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:54.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:54.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:12:54.590 00:12:54.590 --- 10.0.0.3 ping statistics --- 00:12:54.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.590 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:54.590 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:54.590 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:12:54.590 00:12:54.590 --- 10.0.0.4 ping statistics --- 00:12:54.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.590 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:54.590 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:54.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:54.591 00:12:54.591 --- 10.0.0.1 ping statistics --- 00:12:54.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.591 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:54.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:12:54.591 00:12:54.591 --- 10.0.0.2 ping statistics --- 00:12:54.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.591 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64407 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64407 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64407 ']' 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.591 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:54.591 [2024-11-27 06:07:59.652515] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
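The "Cannot find device" messages and ping checks above come from nvmf_veth_init first tearing down any stale topology and then rebuilding it: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything joined by the nvmf_br bridge. A condensed sketch of that setup for one initiator/target pair (not part of the captured output), assuming iproute2 and root privileges; the interface names and 10.0.0.0/24 addresses mirror the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side ends so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow NVMe/TCP traffic in, then verify reachability as the trace does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3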
00:12:54.591 [2024-11-27 06:07:59.653206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.850 [2024-11-27 06:07:59.808347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.850 [2024-11-27 06:07:59.904421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.850 [2024-11-27 06:07:59.904488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.850 [2024-11-27 06:07:59.904513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.850 [2024-11-27 06:07:59.904523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.850 [2024-11-27 06:07:59.904532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.850 [2024-11-27 06:07:59.905816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.850 [2024-11-27 06:07:59.905977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.850 [2024-11-27 06:07:59.906152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.850 [2024-11-27 06:07:59.906459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.786 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:55.786 [2024-11-27 06:08:00.877119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:56.045 [2024-11-27 06:08:00.889803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:56.045 Malloc0 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:56.045 [2024-11-27 06:08:00.947199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64442 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64444 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:56.045 06:08:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:56.045 { 00:12:56.045 "params": { 00:12:56.045 "name": "Nvme$subsystem", 00:12:56.045 "trtype": "$TEST_TRANSPORT", 00:12:56.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.045 "adrfam": "ipv4", 00:12:56.045 "trsvcid": "$NVMF_PORT", 00:12:56.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.045 "hdgst": ${hdgst:-false}, 00:12:56.045 "ddgst": ${ddgst:-false} 00:12:56.045 }, 00:12:56.045 "method": "bdev_nvme_attach_controller" 00:12:56.045 } 00:12:56.045 EOF 00:12:56.045 )") 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64446 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64448 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:56.045 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:56.045 { 00:12:56.045 "params": { 00:12:56.046 "name": "Nvme$subsystem", 00:12:56.046 "trtype": "$TEST_TRANSPORT", 00:12:56.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "$NVMF_PORT", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.046 "hdgst": ${hdgst:-false}, 00:12:56.046 "ddgst": ${ddgst:-false} 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 } 00:12:56.046 EOF 00:12:56.046 )") 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:12:56.046 { 00:12:56.046 "params": { 00:12:56.046 "name": "Nvme$subsystem", 00:12:56.046 "trtype": "$TEST_TRANSPORT", 00:12:56.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "$NVMF_PORT", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.046 "hdgst": ${hdgst:-false}, 00:12:56.046 "ddgst": ${ddgst:-false} 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 } 00:12:56.046 EOF 00:12:56.046 )") 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:56.046 { 00:12:56.046 "params": { 00:12:56.046 "name": "Nvme$subsystem", 00:12:56.046 "trtype": "$TEST_TRANSPORT", 00:12:56.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "$NVMF_PORT", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.046 "hdgst": ${hdgst:-false}, 00:12:56.046 "ddgst": ${ddgst:-false} 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 } 00:12:56.046 EOF 00:12:56.046 )") 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:56.046 "params": { 00:12:56.046 "name": "Nvme1", 00:12:56.046 "trtype": "tcp", 00:12:56.046 "traddr": "10.0.0.3", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "4420", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.046 "hdgst": false, 00:12:56.046 "ddgst": false 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 }' 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:56.046 "params": { 00:12:56.046 "name": "Nvme1", 00:12:56.046 "trtype": "tcp", 00:12:56.046 "traddr": "10.0.0.3", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "4420", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.046 "hdgst": false, 00:12:56.046 "ddgst": false 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 }' 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:56.046 "params": { 00:12:56.046 "name": "Nvme1", 00:12:56.046 "trtype": "tcp", 00:12:56.046 "traddr": "10.0.0.3", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "4420", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.046 "hdgst": false, 00:12:56.046 "ddgst": false 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 }' 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:56.046 06:08:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:56.046 "params": { 00:12:56.046 "name": "Nvme1", 00:12:56.046 "trtype": "tcp", 00:12:56.046 "traddr": "10.0.0.3", 00:12:56.046 "adrfam": "ipv4", 00:12:56.046 "trsvcid": "4420", 00:12:56.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.046 "hdgst": false, 00:12:56.046 "ddgst": false 00:12:56.046 }, 00:12:56.046 "method": "bdev_nvme_attach_controller" 00:12:56.046 }' 00:12:56.046 [2024-11-27 06:08:01.009772] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:12:56.046 [2024-11-27 06:08:01.009874] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:56.046 [2024-11-27 06:08:01.020702] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:12:56.046 [2024-11-27 06:08:01.020941] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:56.046 06:08:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64442 00:12:56.046 [2024-11-27 06:08:01.064489] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:12:56.046 [2024-11-27 06:08:01.064601] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:56.046 [2024-11-27 06:08:01.078446] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
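On the target side, the rpc_cmd calls traced earlier provision a single malloc-backed subsystem behind the TCP listener that the bdevperf jobs attach to. A minimal sketch of that sequence (not part of the captured output), assuming scripts/rpc.py against the nvmf_tgt started above with --wait-for-rpc:

    # The tiny bdev_io pool (-p 5) and cache (-c 1) are the point of this test:
    # with a queue depth of 128 they force submissions onto the bdev_io_wait retry path.
    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    # TCP transport with the same options as the trace (-o, -u 8192).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks, exported through one subsystem and listener.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420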
00:12:56.046 [2024-11-27 06:08:01.078913] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:56.305 [2024-11-27 06:08:01.270014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.305 [2024-11-27 06:08:01.348032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:56.305 [2024-11-27 06:08:01.348486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.305 [2024-11-27 06:08:01.362213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.564 [2024-11-27 06:08:01.410680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:56.564 [2024-11-27 06:08:01.424608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.564 [2024-11-27 06:08:01.445296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.564 [2024-11-27 06:08:01.519885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:56.564 [2024-11-27 06:08:01.534186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.564 [2024-11-27 06:08:01.552907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.564 Running I/O for 1 seconds... 00:12:56.564 Running I/O for 1 seconds... 00:12:56.564 [2024-11-27 06:08:01.620142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:56.564 [2024-11-27 06:08:01.632926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.877 Running I/O for 1 seconds... 00:12:56.877 Running I/O for 1 seconds... 
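Four bdevperf clients are started here, one per core mask (0x10/0x20/0x40/0x80), each attaching to cnode1 through the JSON generated above and running a different 1-second workload. Roughly, and with the exact flag set of target/bdev_io_wait.sh abbreviated, the launch looks like the sketch below; queue depth, IO size and workloads are read off the result tables that follow, and gen_nvmf_target_json is assumed to be available from nvmf/common.sh:

# Hedged sketch of the four concurrent bdevperf jobs above; flag spelling is the
# usual SPDK app/bdevperf one (-m core mask, --json config) and may differ from
# the script's exact invocation.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
    read -r mask workload <<< "$spec"
    # each instance is pinned to its own core and fed a fresh attach config
    "$bdevperf" -m "$mask" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$workload" -t 1 &
done
wait   # the script waits on the individual pids (64442/64444/64446/64448 in this run)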
00:12:57.841 7637.00 IOPS, 29.83 MiB/s [2024-11-27T06:08:02.938Z] 4918.00 IOPS, 19.21 MiB/s
00:12:57.841 Latency(us)
00:12:57.841 [2024-11-27T06:08:02.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:57.841 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:12:57.841 Nvme1n1 : 1.01 7687.71 30.03 0.00 0.00 16555.62 3500.22 22401.40
00:12:57.841 [2024-11-27T06:08:02.938Z] ===================================================================================================================
00:12:57.841 [2024-11-27T06:08:02.938Z] Total : 7687.71 30.03 0.00 0.00 16555.62 3500.22 22401.40
00:12:57.841
00:12:57.841 Latency(us)
00:12:57.841 [2024-11-27T06:08:02.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:57.841 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:12:57.841 Nvme1n1 : 1.02 4968.19 19.41 0.00 0.00 25554.54 11975.21 35270.28
00:12:57.841 [2024-11-27T06:08:02.938Z] ===================================================================================================================
00:12:57.841 [2024-11-27T06:08:02.938Z] Total : 4968.19 19.41 0.00 0.00 25554.54 11975.21 35270.28
00:12:57.841 162024.00 IOPS, 632.91 MiB/s
00:12:57.841 Latency(us)
00:12:57.841 [2024-11-27T06:08:02.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:57.841 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:12:57.841 Nvme1n1 : 1.00 161693.41 631.61 0.00 0.00 787.43 348.16 2025.66
00:12:57.841 [2024-11-27T06:08:02.938Z] ===================================================================================================================
00:12:57.841 [2024-11-27T06:08:02.938Z] Total : 161693.41 631.61 0.00 0.00 787.43 348.16 2025.66
00:12:57.841 6242.00 IOPS, 24.38 MiB/s
00:12:57.841 Latency(us)
00:12:57.841 [2024-11-27T06:08:02.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:57.841 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:12:57.841 Nvme1n1 : 1.01 6295.86 24.59 0.00 0.00 20200.10 10724.07 31933.91
00:12:57.841 [2024-11-27T06:08:02.938Z] ===================================================================================================================
00:12:57.841 [2024-11-27T06:08:02.938Z] Total : 6295.86 24.59 0.00 0.00 20200.10 10724.07 31933.91
00:12:57.841 06:08:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64444
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64446
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64448
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.101 rmmod nvme_tcp 00:12:58.101 rmmod nvme_fabrics 00:12:58.101 rmmod nvme_keyring 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64407 ']' 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64407 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64407 ']' 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64407 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64407 00:12:58.101 killing process with pid 64407 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64407' 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64407 00:12:58.101 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64407 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:58.360 06:08:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:58.360 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:12:58.619 00:12:58.619 real 0m4.693s 00:12:58.619 user 0m18.927s 00:12:58.619 sys 0m2.447s 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 ************************************ 00:12:58.619 END TEST nvmf_bdev_io_wait 00:12:58.619 ************************************ 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 ************************************ 00:12:58.619 START TEST nvmf_queue_depth 00:12:58.619 ************************************ 00:12:58.619 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:58.879 * Looking for test 
storage... 00:12:58.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.879 --rc genhtml_branch_coverage=1 00:12:58.879 --rc genhtml_function_coverage=1 00:12:58.879 --rc genhtml_legend=1 00:12:58.879 --rc geninfo_all_blocks=1 00:12:58.879 --rc geninfo_unexecuted_blocks=1 00:12:58.879 00:12:58.879 ' 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.879 --rc genhtml_branch_coverage=1 00:12:58.879 --rc genhtml_function_coverage=1 00:12:58.879 --rc genhtml_legend=1 00:12:58.879 --rc geninfo_all_blocks=1 00:12:58.879 --rc geninfo_unexecuted_blocks=1 00:12:58.879 00:12:58.879 ' 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.879 --rc genhtml_branch_coverage=1 00:12:58.879 --rc genhtml_function_coverage=1 00:12:58.879 --rc genhtml_legend=1 00:12:58.879 --rc geninfo_all_blocks=1 00:12:58.879 --rc geninfo_unexecuted_blocks=1 00:12:58.879 00:12:58.879 ' 00:12:58.879 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.879 --rc genhtml_branch_coverage=1 00:12:58.879 --rc genhtml_function_coverage=1 00:12:58.879 --rc genhtml_legend=1 00:12:58.879 --rc geninfo_all_blocks=1 00:12:58.879 --rc geninfo_unexecuted_blocks=1 00:12:58.879 00:12:58.879 ' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:58.880 
06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.880 06:08:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.880 Cannot find device "nvmf_init_br" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.880 Cannot find device "nvmf_init_br2" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.880 Cannot find device "nvmf_tgt_br" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.880 Cannot find device "nvmf_tgt_br2" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.880 Cannot find device "nvmf_init_br" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.880 Cannot find device "nvmf_init_br2" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.880 Cannot find device "nvmf_tgt_br" 00:12:58.880 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:12:58.881 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:59.140 Cannot find device "nvmf_tgt_br2" 00:12:59.140 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:12:59.140 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:59.140 Cannot find device "nvmf_br" 00:12:59.140 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:12:59.140 06:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:59.140 Cannot find device "nvmf_init_if" 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:59.140 Cannot find device "nvmf_init_if2" 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.140 06:08:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.140 
06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:59.140 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:59.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:12:59.141 00:12:59.141 --- 10.0.0.3 ping statistics --- 00:12:59.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.141 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:59.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:59.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:12:59.141 00:12:59.141 --- 10.0.0.4 ping statistics --- 00:12:59.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.141 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:59.141 00:12:59.141 --- 10.0.0.1 ping statistics --- 00:12:59.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.141 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:59.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:12:59.141 00:12:59.141 --- 10.0.0.2 ping statistics --- 00:12:59.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.141 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.141 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64743 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64743 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64743 ']' 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.400 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.400 [2024-11-27 06:08:04.311468] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
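For reference, the veth topology that nvmf_veth_init assembles above, before nvmf_tgt is started inside the namespace, boils down to the iproute2/iptables sequence below, condensed to the first initiator/target pair; interface names and addresses are the ones that appear in the trace:

# Condensed sketch of the nvmf_veth_init setup traced above (one pair only).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                            # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host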
00:12:59.400 [2024-11-27 06:08:04.311559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.400 [2024-11-27 06:08:04.461389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.661 [2024-11-27 06:08:04.519216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.661 [2024-11-27 06:08:04.519287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.661 [2024-11-27 06:08:04.519315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.661 [2024-11-27 06:08:04.519339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.661 [2024-11-27 06:08:04.519363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.661 [2024-11-27 06:08:04.519781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.661 [2024-11-27 06:08:04.574610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 [2024-11-27 06:08:04.693640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 Malloc0 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 [2024-11-27 06:08:04.746767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64769 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64769 /var/tmp/bdevperf.sock 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64769 ']' 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.661 06:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.919 [2024-11-27 06:08:04.812559] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
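Condensed, the target-side RPCs traced above plus the bdevperf attach that follows amount to the sequence sketched below; rpc.py stands in for the test's rpc_cmd wrapper, and the socket paths, NQNs and flags are the ones shown in the log:

# Target side: TCP transport, a 64 MiB / 512-byte-block malloc bdev, and a
# subsystem exporting it on 10.0.0.3:4420.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Client side: bdevperf idles in -z mode on its own RPC socket, gets the controller
# attached, then is told to run the 10-second verify workload at queue depth 1024.
# (queue_depth.sh waits for the socket via waitforlisten before issuing the attach.)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests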
00:12:59.919 [2024-11-27 06:08:04.812677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64769 ] 00:12:59.919 [2024-11-27 06:08:04.965047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.178 [2024-11-27 06:08:05.026165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.178 [2024-11-27 06:08:05.085209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.746 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.746 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:00.746 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:00.746 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.746 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 NVMe0n1 00:13:01.005 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.005 06:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:01.005 Running I/O for 10 seconds... 00:13:03.317 6679.00 IOPS, 26.09 MiB/s [2024-11-27T06:08:09.349Z] 7205.00 IOPS, 28.14 MiB/s [2024-11-27T06:08:10.283Z] 7518.67 IOPS, 29.37 MiB/s [2024-11-27T06:08:11.219Z] 7686.75 IOPS, 30.03 MiB/s [2024-11-27T06:08:12.154Z] 7970.00 IOPS, 31.13 MiB/s [2024-11-27T06:08:13.092Z] 8041.67 IOPS, 31.41 MiB/s [2024-11-27T06:08:14.029Z] 8092.14 IOPS, 31.61 MiB/s [2024-11-27T06:08:15.406Z] 8223.88 IOPS, 32.12 MiB/s [2024-11-27T06:08:16.365Z] 8235.78 IOPS, 32.17 MiB/s [2024-11-27T06:08:16.365Z] 8244.00 IOPS, 32.20 MiB/s 00:13:11.268 Latency(us) 00:13:11.268 [2024-11-27T06:08:16.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.268 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:11.268 Verification LBA range: start 0x0 length 0x4000 00:13:11.268 NVMe0n1 : 10.07 8281.06 32.35 0.00 0.00 123071.74 13643.40 93895.21 00:13:11.268 [2024-11-27T06:08:16.365Z] =================================================================================================================== 00:13:11.268 [2024-11-27T06:08:16.365Z] Total : 8281.06 32.35 0.00 0.00 123071.74 13643.40 93895.21 00:13:11.268 { 00:13:11.268 "results": [ 00:13:11.268 { 00:13:11.268 "job": "NVMe0n1", 00:13:11.268 "core_mask": "0x1", 00:13:11.268 "workload": "verify", 00:13:11.268 "status": "finished", 00:13:11.268 "verify_range": { 00:13:11.268 "start": 0, 00:13:11.268 "length": 16384 00:13:11.268 }, 00:13:11.268 "queue_depth": 1024, 00:13:11.268 "io_size": 4096, 00:13:11.268 "runtime": 10.068397, 00:13:11.268 "iops": 8281.060033687587, 00:13:11.268 "mibps": 32.347890756592136, 00:13:11.268 "io_failed": 0, 00:13:11.268 "io_timeout": 0, 00:13:11.268 "avg_latency_us": 123071.74210668518, 00:13:11.268 "min_latency_us": 13643.403636363637, 00:13:11.268 "max_latency_us": 93895.21454545454 
00:13:11.268 } 00:13:11.268 ], 00:13:11.268 "core_count": 1 00:13:11.268 } 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64769 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64769 ']' 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64769 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64769 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.268 killing process with pid 64769 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64769' 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64769 00:13:11.268 Received shutdown signal, test time was about 10.000000 seconds 00:13:11.268 00:13:11.268 Latency(us) 00:13:11.268 [2024-11-27T06:08:16.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.268 [2024-11-27T06:08:16.365Z] =================================================================================================================== 00:13:11.268 [2024-11-27T06:08:16.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64769 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.268 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.527 rmmod nvme_tcp 00:13:11.527 rmmod nvme_fabrics 00:13:11.527 rmmod nvme_keyring 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64743 ']' 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64743 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64743 ']' 
00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64743 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64743 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:11.527 killing process with pid 64743 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64743' 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64743 00:13:11.527 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64743 00:13:11.786 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.786 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.786 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.786 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:11.787 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:12.045 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:12.046 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:12.046 06:08:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.046 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.046 06:08:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:13:12.046 00:13:12.046 real 0m13.381s 00:13:12.046 user 0m22.928s 00:13:12.046 sys 0m2.378s 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:12.046 ************************************ 00:13:12.046 END TEST nvmf_queue_depth 00:13:12.046 ************************************ 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:12.046 ************************************ 00:13:12.046 START TEST nvmf_target_multipath 00:13:12.046 ************************************ 00:13:12.046 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:12.306 * Looking for test storage... 
00:13:12.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:12.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.306 --rc genhtml_branch_coverage=1 00:13:12.306 --rc genhtml_function_coverage=1 00:13:12.306 --rc genhtml_legend=1 00:13:12.306 --rc geninfo_all_blocks=1 00:13:12.306 --rc geninfo_unexecuted_blocks=1 00:13:12.306 00:13:12.306 ' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:12.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.306 --rc genhtml_branch_coverage=1 00:13:12.306 --rc genhtml_function_coverage=1 00:13:12.306 --rc genhtml_legend=1 00:13:12.306 --rc geninfo_all_blocks=1 00:13:12.306 --rc geninfo_unexecuted_blocks=1 00:13:12.306 00:13:12.306 ' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:12.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.306 --rc genhtml_branch_coverage=1 00:13:12.306 --rc genhtml_function_coverage=1 00:13:12.306 --rc genhtml_legend=1 00:13:12.306 --rc geninfo_all_blocks=1 00:13:12.306 --rc geninfo_unexecuted_blocks=1 00:13:12.306 00:13:12.306 ' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:12.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.306 --rc genhtml_branch_coverage=1 00:13:12.306 --rc genhtml_function_coverage=1 00:13:12.306 --rc genhtml_legend=1 00:13:12.306 --rc geninfo_all_blocks=1 00:13:12.306 --rc geninfo_unexecuted_blocks=1 00:13:12.306 00:13:12.306 ' 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.306 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.307 
06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:12.307 06:08:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.307 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:12.308 Cannot find device "nvmf_init_br" 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:12.308 Cannot find device "nvmf_init_br2" 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:12.308 Cannot find device "nvmf_tgt_br" 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.308 Cannot find device "nvmf_tgt_br2" 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:12.308 Cannot find device "nvmf_init_br" 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:12.308 Cannot find device "nvmf_init_br2" 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:13:12.308 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:12.308 Cannot find device "nvmf_tgt_br" 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:12.568 Cannot find device "nvmf_tgt_br2" 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:12.568 Cannot find device "nvmf_br" 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:12.568 Cannot find device "nvmf_init_if" 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:12.568 Cannot find device "nvmf_init_if2" 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
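
For orientation, the nvmf_veth_init trace above (common.sh@177-214) boils down to the topology below. This is a condensed sketch reconstructed from the logged commands, not an excerpt of common.sh itself; the namespace, interface names, and 10.0.0.x addresses are exactly the ones the harness logs. The earlier "Cannot find device" / "Cannot open network namespace" messages come from the unconditional teardown (common.sh@162-174) that runs before this rebuild and are expected on a clean host.

    # Two initiator-side veth pairs stay in the root namespace; the two target-side
    # interfaces move into nvmf_tgt_ns_spdk. All four bridge ends are enslaved to
    # nvmf_br so the initiator can reach both target listeners (10.0.0.3, 10.0.0.4).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1 -> 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2 -> 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target listener 1 -> 10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target listener 2 -> 10.0.0.4

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The iptables ACCEPT rules and the four pings that follow in the trace simply confirm that the bridge forwards in both directions and that port 4420 will be reachable before any NVMe traffic is attempted.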
00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:12.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:13:12.568 00:13:12.568 --- 10.0.0.3 ping statistics --- 00:13:12.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.568 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:12.568 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:12.568 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:13:12.568 00:13:12.568 --- 10.0.0.4 ping statistics --- 00:13:12.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.568 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:12.568 00:13:12.568 --- 10.0.0.1 ping statistics --- 00:13:12.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.568 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:12.568 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:12.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:12.568 00:13:12.568 --- 10.0.0.2 ping statistics --- 00:13:12.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.568 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65142 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65142 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65142 ']' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
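
The target application itself is started inside that namespace and only provisioned once its RPC socket answers. The sequence below is a condensed sketch assembled from the nvmfappstart and rpc.py calls in this trace (binary path, core mask, malloc sizes, NQN, and serial are the logged values); it is illustrative rather than a verbatim excerpt of multipath.sh.

    # Launch nvmf_tgt inside the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten polls until the app accepts RPCs on /var/tmp/spdk.sock.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as logged
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

The initiator then opens one controller per listener with nvme connect against 10.0.0.3 and 10.0.0.4 (both port 4420), which is what later shows up as the two paths nvme0c0n1 and nvme0c1n1 under /sys/class/nvme-subsystem in the trace below.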
00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.828 06:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:12.828 [2024-11-27 06:08:17.752096] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:13:12.828 [2024-11-27 06:08:17.752230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.828 [2024-11-27 06:08:17.907260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.087 [2024-11-27 06:08:17.971342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.087 [2024-11-27 06:08:17.971413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.087 [2024-11-27 06:08:17.971438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.087 [2024-11-27 06:08:17.971448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.087 [2024-11-27 06:08:17.971457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.087 [2024-11-27 06:08:17.972753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.087 [2024-11-27 06:08:17.972876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.087 [2024-11-27 06:08:17.973005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.087 [2024-11-27 06:08:17.973015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.087 [2024-11-27 06:08:18.033701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.023 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:14.279 [2024-11-27 06:08:19.144464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.279 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:14.538 Malloc0 00:13:14.538 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:14.805 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.077 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:15.335 [2024-11-27 06:08:20.328142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:15.335 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:13:15.593 [2024-11-27 06:08:20.596550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:13:15.593 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:13:15.851 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:13:15.851 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.851 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:13:15.851 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.851 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:15.851 06:08:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65237 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:18.380 06:08:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:18.380 [global] 00:13:18.380 thread=1 00:13:18.380 invalidate=1 00:13:18.380 rw=randrw 00:13:18.380 time_based=1 00:13:18.380 runtime=6 00:13:18.380 ioengine=libaio 00:13:18.380 direct=1 00:13:18.380 bs=4096 00:13:18.380 iodepth=128 00:13:18.380 norandommap=0 00:13:18.380 numjobs=1 00:13:18.380 00:13:18.380 verify_dump=1 00:13:18.380 verify_backlog=512 00:13:18.380 verify_state_save=0 00:13:18.380 do_verify=1 00:13:18.380 verify=crc32c-intel 00:13:18.380 [job0] 00:13:18.380 filename=/dev/nvme0n1 00:13:18.380 Could not set queue depth (nvme0n1) 00:13:18.380 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.380 fio-3.35 00:13:18.380 Starting 1 thread 00:13:18.947 06:08:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:19.205 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:19.463 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:20.029 06:08:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:20.029 06:08:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65237 00:13:24.210 00:13:24.210 job0: (groupid=0, jobs=1): err= 0: pid=65259: Wed Nov 27 06:08:29 2024 00:13:24.210 read: IOPS=9313, BW=36.4MiB/s (38.1MB/s)(219MiB/6007msec) 00:13:24.210 slat (usec): min=7, max=9901, avg=64.42, stdev=253.09 00:13:24.210 clat (usec): min=2021, max=21095, avg=9415.31, stdev=1551.78 00:13:24.210 lat (usec): min=2050, max=21160, avg=9479.72, stdev=1556.92 00:13:24.210 clat percentiles (usec): 00:13:24.210 | 1.00th=[ 5145], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8586], 00:13:24.210 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:13:24.210 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10683], 95.00th=[12780], 00:13:24.210 | 99.00th=[14877], 99.50th=[15401], 99.90th=[17433], 99.95th=[19268], 00:13:24.210 | 99.99th=[20055] 00:13:24.210 bw ( KiB/s): min= 5032, max=26672, per=51.94%, avg=19351.18, stdev=6788.43, samples=11 00:13:24.210 iops : min= 1258, max= 6668, avg=4837.73, stdev=1697.10, samples=11 00:13:24.210 write: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(111MiB/4970msec); 0 zone resets 00:13:24.210 slat (usec): min=14, max=2151, avg=73.72, stdev=187.45 00:13:24.210 clat (usec): min=2025, max=20091, avg=8290.52, stdev=1453.54 00:13:24.210 lat (usec): min=2099, max=20134, avg=8364.24, stdev=1458.01 00:13:24.210 clat percentiles (usec): 00:13:24.210 | 1.00th=[ 3884], 5.00th=[ 5276], 10.00th=[ 6980], 20.00th=[ 7635], 00:13:24.210 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:13:24.210 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9765], 00:13:24.210 | 99.00th=[12911], 99.50th=[14484], 99.90th=[19006], 99.95th=[19530], 00:13:24.210 | 99.99th=[20055] 00:13:24.210 bw ( KiB/s): min= 5408, max=26488, per=84.82%, avg=19367.18, stdev=6561.53, samples=11 00:13:24.210 iops : min= 1352, max= 6622, avg=4841.73, stdev=1640.37, samples=11 00:13:24.210 lat (msec) : 4=0.59%, 10=84.01%, 20=15.39%, 50=0.02% 00:13:24.210 cpu : usr=5.81%, sys=19.43%, ctx=5006, majf=0, minf=139 00:13:24.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:24.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.210 issued rwts: total=55945,28369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.210 00:13:24.210 Run status group 0 (all jobs): 00:13:24.210 READ: bw=36.4MiB/s (38.1MB/s), 36.4MiB/s-36.4MiB/s (38.1MB/s-38.1MB/s), io=219MiB (229MB), run=6007-6007msec 00:13:24.210 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=111MiB (116MB), run=4970-4970msec 00:13:24.210 00:13:24.210 Disk stats (read/write): 00:13:24.210 nvme0n1: ios=55126/27816, merge=0/0, ticks=498053/216962, in_queue=715015, util=98.60% 00:13:24.210 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:24.774 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:25.032 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:25.032 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65339 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:25.033 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:25.033 [global] 00:13:25.033 thread=1 00:13:25.033 invalidate=1 00:13:25.033 rw=randrw 00:13:25.033 time_based=1 00:13:25.033 runtime=6 00:13:25.033 ioengine=libaio 00:13:25.033 direct=1 00:13:25.033 bs=4096 00:13:25.033 iodepth=128 00:13:25.033 norandommap=0 00:13:25.033 numjobs=1 00:13:25.033 00:13:25.033 verify_dump=1 00:13:25.033 verify_backlog=512 00:13:25.033 verify_state_save=0 00:13:25.033 do_verify=1 00:13:25.033 verify=crc32c-intel 00:13:25.033 [job0] 00:13:25.033 filename=/dev/nvme0n1 00:13:25.033 Could not set queue depth (nvme0n1) 00:13:25.033 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:25.033 fio-3.35 00:13:25.033 Starting 1 thread 00:13:25.968 06:08:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:26.226 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:26.484 
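
The failover logic being exercised here is simple to state: the test flips each listener's ANA state over RPC while fio keeps I/O in flight, then polls the kernel's per-path ana_state files until they reflect the change. The helper below paraphrases multipath.sh's check_ana_state as it appears in the xtrace (path name, 20-second budget, sysfs file); the flip() wrapper name is hypothetical and only groups the logged RPC call.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    flip() {   # hypothetical wrapper; the RPC and its arguments are exactly as logged
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a "$1" -s 4420 -n "$2"
    }

    check_ana_state() {   # wait (~20 s) until the kernel reports the expected state for a path
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- > 0 )) || return 1
            sleep 1
        done
    }

    flip 10.0.0.3 inaccessible
    flip 10.0.0.4 non_optimized
    check_ana_state nvme0c0n1 inaccessible
    check_ana_state nvme0c1n1 non-optimized   # sysfs spells it with a dash, the RPC with an underscore

Each fio run is preceded by selecting an I/O policy (numa for the first run, round-robin for the second), and the job summaries in this trace report err=0 for both runs despite the state flips, which is what the test asserts.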
06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:26.484 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:26.742 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:27.000 06:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65339 00:13:31.186 00:13:31.186 job0: (groupid=0, jobs=1): err= 0: pid=65360: Wed Nov 27 06:08:36 2024 00:13:31.186 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(247MiB/6007msec) 00:13:31.186 slat (usec): min=5, max=8206, avg=47.04, stdev=211.31 00:13:31.186 clat (usec): min=349, max=17384, avg=8304.86, stdev=2354.69 00:13:31.186 lat (usec): min=370, max=17393, avg=8351.90, stdev=2370.62 00:13:31.186 clat percentiles (usec): 00:13:31.186 | 1.00th=[ 2474], 5.00th=[ 3687], 10.00th=[ 4752], 20.00th=[ 6521], 00:13:31.186 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:13:31.186 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[12387], 00:13:31.186 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15401], 99.95th=[16057], 00:13:31.186 | 99.99th=[17171] 00:13:31.186 bw ( KiB/s): min= 9832, max=37192, per=53.63%, avg=22560.09, stdev=8516.29, samples=11 00:13:31.186 iops : min= 2458, max= 9298, avg=5640.00, stdev=2129.03, samples=11 00:13:31.186 write: IOPS=6294, BW=24.6MiB/s (25.8MB/s)(132MiB/5360msec); 0 zone resets 00:13:31.186 slat (usec): min=11, max=2833, avg=57.81, stdev=152.10 00:13:31.186 clat (usec): min=721, max=17208, avg=7059.02, stdev=2037.27 00:13:31.186 lat (usec): min=744, max=17231, avg=7116.83, stdev=2054.24 00:13:31.186 clat percentiles (usec): 00:13:31.186 | 1.00th=[ 2540], 5.00th=[ 3326], 10.00th=[ 3916], 20.00th=[ 4883], 00:13:31.186 | 30.00th=[ 6128], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8029], 00:13:31.186 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9372], 00:13:31.186 | 99.00th=[11600], 99.50th=[12649], 99.90th=[14484], 99.95th=[15008], 00:13:31.186 | 99.99th=[16319] 00:13:31.186 bw ( KiB/s): min=10168, max=36568, per=89.64%, avg=22571.36, stdev=8396.96, samples=11 00:13:31.186 iops : min= 2542, max= 9142, avg=5642.73, stdev=2099.13, samples=11 00:13:31.186 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.06% 00:13:31.186 lat (msec) : 2=0.36%, 4=7.47%, 10=82.02%, 20=10.04% 00:13:31.186 cpu : usr=5.41%, sys=21.98%, ctx=5708, majf=0, minf=102 00:13:31.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:31.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:31.186 issued rwts: total=63167,33739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.186 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:13:31.186 00:13:31.186 Run status group 0 (all jobs): 00:13:31.186 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=247MiB (259MB), run=6007-6007msec 00:13:31.186 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=132MiB (138MB), run=5360-5360msec 00:13:31.186 00:13:31.186 Disk stats (read/write): 00:13:31.186 nvme0n1: ios=62294/33177, merge=0/0, ticks=496537/220201, in_queue=716738, util=98.70% 00:13:31.186 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:13:31.445 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.703 rmmod nvme_tcp 00:13:31.703 rmmod nvme_fabrics 00:13:31.703 rmmod nvme_keyring 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
65142 ']' 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65142 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65142 ']' 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65142 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65142 00:13:31.703 killing process with pid 65142 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65142' 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65142 00:13:31.703 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65142 00:13:31.961 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:31.962 06:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:31.962 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:31.962 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:31.962 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.962 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:32.219 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:32.220 06:08:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:13:32.220 ************************************ 00:13:32.220 END TEST nvmf_target_multipath 00:13:32.220 ************************************ 00:13:32.220 00:13:32.220 real 0m20.147s 00:13:32.220 user 1m16.051s 00:13:32.220 sys 0m8.734s 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 ************************************ 00:13:32.220 START TEST nvmf_zcopy 00:13:32.220 ************************************ 00:13:32.220 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:32.478 * Looking for test storage... 
00:13:32.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.478 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.479 --rc genhtml_branch_coverage=1 00:13:32.479 --rc genhtml_function_coverage=1 00:13:32.479 --rc genhtml_legend=1 00:13:32.479 --rc geninfo_all_blocks=1 00:13:32.479 --rc geninfo_unexecuted_blocks=1 00:13:32.479 00:13:32.479 ' 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.479 --rc genhtml_branch_coverage=1 00:13:32.479 --rc genhtml_function_coverage=1 00:13:32.479 --rc genhtml_legend=1 00:13:32.479 --rc geninfo_all_blocks=1 00:13:32.479 --rc geninfo_unexecuted_blocks=1 00:13:32.479 00:13:32.479 ' 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.479 --rc genhtml_branch_coverage=1 00:13:32.479 --rc genhtml_function_coverage=1 00:13:32.479 --rc genhtml_legend=1 00:13:32.479 --rc geninfo_all_blocks=1 00:13:32.479 --rc geninfo_unexecuted_blocks=1 00:13:32.479 00:13:32.479 ' 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.479 --rc genhtml_branch_coverage=1 00:13:32.479 --rc genhtml_function_coverage=1 00:13:32.479 --rc genhtml_legend=1 00:13:32.479 --rc geninfo_all_blocks=1 00:13:32.479 --rc geninfo_unexecuted_blocks=1 00:13:32.479 00:13:32.479 ' 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.479 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.480 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:32.480 Cannot find device "nvmf_init_br" 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:13:32.480 06:08:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:32.480 Cannot find device "nvmf_init_br2" 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:32.480 Cannot find device "nvmf_tgt_br" 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.480 Cannot find device "nvmf_tgt_br2" 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:13:32.480 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:32.738 Cannot find device "nvmf_init_br" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:32.738 Cannot find device "nvmf_init_br2" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:32.738 Cannot find device "nvmf_tgt_br" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:32.738 Cannot find device "nvmf_tgt_br2" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:32.738 Cannot find device "nvmf_br" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:32.738 Cannot find device "nvmf_init_if" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:32.738 Cannot find device "nvmf_init_if2" 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.738 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:32.997 06:08:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:32.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:32.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:32.997 00:13:32.997 --- 10.0.0.3 ping statistics --- 00:13:32.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.997 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:32.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:32.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:13:32.997 00:13:32.997 --- 10.0.0.4 ping statistics --- 00:13:32.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.997 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:32.997 00:13:32.997 --- 10.0.0.1 ping statistics --- 00:13:32.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.997 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:32.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:13:32.997 00:13:32.997 --- 10.0.0.2 ping statistics --- 00:13:32.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.997 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65668 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65668 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65668 ']' 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.997 06:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:32.997 [2024-11-27 06:08:37.971396] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:13:32.997 [2024-11-27 06:08:37.971697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.256 [2024-11-27 06:08:38.126266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.256 [2024-11-27 06:08:38.189511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.256 [2024-11-27 06:08:38.189572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.256 [2024-11-27 06:08:38.189588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.256 [2024-11-27 06:08:38.189598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.256 [2024-11-27 06:08:38.189607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.256 [2024-11-27 06:08:38.190106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.256 [2024-11-27 06:08:38.249404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:34.197 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.198 [2024-11-27 06:08:38.983481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.198 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.198 [2024-11-27 06:08:38.999631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.198 malloc0 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:34.198 { 00:13:34.198 "params": { 00:13:34.198 "name": "Nvme$subsystem", 00:13:34.198 "trtype": "$TEST_TRANSPORT", 00:13:34.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:34.198 "adrfam": "ipv4", 00:13:34.198 "trsvcid": "$NVMF_PORT", 00:13:34.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:34.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:34.198 "hdgst": ${hdgst:-false}, 00:13:34.198 "ddgst": ${ddgst:-false} 00:13:34.198 }, 00:13:34.198 "method": "bdev_nvme_attach_controller" 00:13:34.198 } 00:13:34.198 EOF 00:13:34.198 )") 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:34.198 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:34.198 "params": { 00:13:34.198 "name": "Nvme1", 00:13:34.198 "trtype": "tcp", 00:13:34.198 "traddr": "10.0.0.3", 00:13:34.198 "adrfam": "ipv4", 00:13:34.198 "trsvcid": "4420", 00:13:34.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:34.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:34.198 "hdgst": false, 00:13:34.198 "ddgst": false 00:13:34.198 }, 00:13:34.198 "method": "bdev_nvme_attach_controller" 00:13:34.198 }' 00:13:34.198 [2024-11-27 06:08:39.118396] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:13:34.198 [2024-11-27 06:08:39.118587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65701 ] 00:13:34.198 [2024-11-27 06:08:39.271719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.460 [2024-11-27 06:08:39.329238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.460 [2024-11-27 06:08:39.395640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.460 Running I/O for 10 seconds... 00:13:36.772 5313.00 IOPS, 41.51 MiB/s [2024-11-27T06:08:42.863Z] 5553.00 IOPS, 43.38 MiB/s [2024-11-27T06:08:43.796Z] 5616.67 IOPS, 43.88 MiB/s [2024-11-27T06:08:44.730Z] 5665.75 IOPS, 44.26 MiB/s [2024-11-27T06:08:45.665Z] 5694.60 IOPS, 44.49 MiB/s [2024-11-27T06:08:46.600Z] 5811.33 IOPS, 45.40 MiB/s [2024-11-27T06:08:47.535Z] 5938.71 IOPS, 46.40 MiB/s [2024-11-27T06:08:48.910Z] 6026.12 IOPS, 47.08 MiB/s [2024-11-27T06:08:49.844Z] 6070.56 IOPS, 47.43 MiB/s [2024-11-27T06:08:49.844Z] 6019.40 IOPS, 47.03 MiB/s 00:13:44.747 Latency(us) 00:13:44.747 [2024-11-27T06:08:49.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.747 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:44.747 Verification LBA range: start 0x0 length 0x1000 00:13:44.747 Nvme1n1 : 10.01 6021.31 47.04 0.00 0.00 21191.52 3053.38 30742.34 00:13:44.747 [2024-11-27T06:08:49.844Z] =================================================================================================================== 00:13:44.747 [2024-11-27T06:08:49.844Z] Total : 6021.31 47.04 0.00 0.00 21191.52 3053.38 30742.34 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65824 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:44.747 { 00:13:44.747 "params": { 00:13:44.747 "name": "Nvme$subsystem", 00:13:44.747 "trtype": "$TEST_TRANSPORT", 00:13:44.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.747 "adrfam": "ipv4", 00:13:44.747 "trsvcid": "$NVMF_PORT", 00:13:44.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.747 "hdgst": ${hdgst:-false}, 00:13:44.747 "ddgst": ${ddgst:-false} 00:13:44.747 }, 00:13:44.747 "method": "bdev_nvme_attach_controller" 00:13:44.747 } 00:13:44.747 EOF 00:13:44.747 )") 00:13:44.747 [2024-11-27 06:08:49.759262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.759311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:44.747 06:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:44.747 "params": { 00:13:44.747 "name": "Nvme1", 00:13:44.747 "trtype": "tcp", 00:13:44.747 "traddr": "10.0.0.3", 00:13:44.747 "adrfam": "ipv4", 00:13:44.747 "trsvcid": "4420", 00:13:44.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.747 "hdgst": false, 00:13:44.747 "ddgst": false 00:13:44.747 }, 00:13:44.747 "method": "bdev_nvme_attach_controller" 00:13:44.747 }' 00:13:44.747 [2024-11-27 06:08:49.771199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.771267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.747 [2024-11-27 06:08:49.783199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.783434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.747 [2024-11-27 06:08:49.795221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.795327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.747 [2024-11-27 06:08:49.807204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.807380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.747 [2024-11-27 06:08:49.816044] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:13:44.747 [2024-11-27 06:08:49.816454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65824 ] 00:13:44.747 [2024-11-27 06:08:49.819211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.819251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.747 [2024-11-27 06:08:49.831210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.747 [2024-11-27 06:08:49.831240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.843224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.006 [2024-11-27 06:08:49.843382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.855227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.006 [2024-11-27 06:08:49.855282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.867234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.006 [2024-11-27 06:08:49.867437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.879224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.006 [2024-11-27 06:08:49.879448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.891220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.006 [2024-11-27 06:08:49.891407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.903233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.006 [2024-11-27 06:08:49.903418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.006 [2024-11-27 06:08:49.915223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.915409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.927254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.927286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.939237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.939263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.951238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.951264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.963243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.963270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.973167] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:13:45.007 [2024-11-27 06:08:49.975285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.975318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.987287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.987331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:49.999270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:49.999301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.011274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.011305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.023292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.023327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.035190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.007 [2024-11-27 06:08:50.035279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.035307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.047310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.047343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.059323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.059364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.071335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.071374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.083347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.083395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.095337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.007 [2024-11-27 06:08:50.095375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.007 [2024-11-27 06:08:50.101461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.266 [2024-11-27 06:08:50.107330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.107359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.119337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.119370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.131335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
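Annotation: the DPDK EAL parameters above start bdevperf with core mask "-c 0x1", which is why the app reports one available core and a single reactor on core 0; the later sock_subsystem_init notice shows the uring socket implementation being selected for this run. A minimal check of the core-mask arithmetic, as an illustration only:

# The reactor count is the popcount of the "-c" core mask (assumption: no
# other core restrictions are in play for this sketch).
core_mask = 0x1
print(bin(core_mask).count("1"))   # -> 1, matching "Total cores available: 1"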
00:13:45.266 [2024-11-27 06:08:50.131383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.143346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.143374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.155356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.155384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.167417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.167454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.179405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.179438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.191421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.191457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.203417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.203450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.215453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.215483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.227447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.227482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 Running I/O for 5 seconds... 
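Annotation: from "Running I/O for 5 seconds..." onward, the repeating error pairs (subsystem.c spdk_nvmf_subsystem_add_ns_ext followed by nvmf_rpc.c nvmf_rpc_ns_paused) each correspond to one namespace-add RPC that is rejected because NSID 1 is already attached to the subsystem; the nvmf_zcopy test appears to issue these repeatedly while bdevperf I/O is in flight, presumably to exercise subsystem pause/resume under zcopy load. A hedged sketch of the kind of request behind each pair is below; the bdev name "malloc0" and the exact parameter layout of nvmf_subsystem_add_ns are assumptions for illustration, only the NQN and NSID appear in the log.

# Hedged sketch of the request whose rejection produces each error pair above.
import json

add_ns_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {
            "bdev_name": "malloc0",   # hypothetical bdev backing the namespace
            "nsid": 1,                # collides with the NSID already in use
        },
    },
}

# Sent to the target's RPC socket (as in the earlier sketch), the reply would be
# a JSON-RPC error and the target would log the "NSID 1 already in use" /
# "Unable to add namespace" pair seen throughout this run.
print(json.dumps(add_ns_request, indent=2))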
00:13:45.266 [2024-11-27 06:08:50.245828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.245871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.260813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.260852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.270974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.271180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.286615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.286772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.302038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.302274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.318847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.318886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.335501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.335536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.266 [2024-11-27 06:08:50.351906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.266 [2024-11-27 06:08:50.351945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.368686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.368839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.386028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.386067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.402297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.402333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.420208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.420254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.434842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.434879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.450457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.450493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.468796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 
[2024-11-27 06:08:50.468834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.483915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.484120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.494202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.494237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.509088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.509123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.526133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.526342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.541390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.541440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.556472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.556660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.572624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.572662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.589885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.589922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.525 [2024-11-27 06:08:50.606447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.525 [2024-11-27 06:08:50.606646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.623445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.623496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.639923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.640019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.657009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.657224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.673318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.673354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.690293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.690328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.706865] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.706902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.725322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.725357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.740072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.740108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.749759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.749797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.765882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.765919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.782713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.782751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.797903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.797940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.814321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.814357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.831367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.831400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.847013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.847202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.865328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.865367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.880270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.880309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.889335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.889372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.905648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.905693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:45.844 [2024-11-27 06:08:50.914707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:45.844 [2024-11-27 06:08:50.914865] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.102 [2024-11-27 06:08:50.931322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:50.931356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:50.954176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:50.954246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:50.971010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:50.971058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:50.986687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:50.986732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.003697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.003748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.021248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.021291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.036680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.036940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.054349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.054543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.069914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.070162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.086823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.087035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.099701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.099902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.118679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.118869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.135208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.135393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.151532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.151717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.163957] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.164180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.179522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.179726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.103 [2024-11-27 06:08:51.196649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.103 [2024-11-27 06:08:51.196840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.212540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.212753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.229830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.229983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 10529.00 IOPS, 82.26 MiB/s [2024-11-27T06:08:51.459Z] [2024-11-27 06:08:51.246604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.246806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.263253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.263447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.280338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.280494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.296086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.296271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.314448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.314627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.331086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.331310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.347640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.347676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.357173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.357353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.372190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.372222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.384511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
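Annotation: the periodic bdevperf readings interleaved above ("10529.00 IOPS, 82.26 MiB/s" here, similar figures later) are self-consistent with an I/O size of about 8 KiB, since throughput = IOPS x I/O size. The 8 KiB figure is inferred from the numbers, not stated anywhere in the log:

# Quick consistency check (assumed 8 KiB per I/O, inferred rather than logged).
iops = 10529.00
io_size = 8 * 1024
mib_per_s = iops * io_size / (1024 ** 2)
print(f"{mib_per_s:.2f} MiB/s")   # ~82.26 MiB/s, matching the reading above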
00:13:46.362 [2024-11-27 06:08:51.384551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.399268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.399301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.415527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.415560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.431091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.431299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.362 [2024-11-27 06:08:51.449598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.362 [2024-11-27 06:08:51.449631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.464962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.464995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.476265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.476298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.492687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.492721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.507582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.507614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.523607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.523641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.539640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.539689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.557949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.557986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.572519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.572560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.588067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.588100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.605709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.605771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.621765] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.621801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.639470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.639520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.655820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.655857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.674898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.675165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.691121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.691425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.621 [2024-11-27 06:08:51.707082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.621 [2024-11-27 06:08:51.707331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.723778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.723827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.738057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.738188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.758750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.758953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.775444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.775728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.791615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.791767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.801577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.801726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.817674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.817845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.832685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.832842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.848672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.848890] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.867208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.867413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.879 [2024-11-27 06:08:51.882401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.879 [2024-11-27 06:08:51.882592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.880 [2024-11-27 06:08:51.898880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.880 [2024-11-27 06:08:51.899046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.880 [2024-11-27 06:08:51.916223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.880 [2024-11-27 06:08:51.916421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.880 [2024-11-27 06:08:51.933208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.880 [2024-11-27 06:08:51.933381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.880 [2024-11-27 06:08:51.948983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.880 [2024-11-27 06:08:51.949205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.880 [2024-11-27 06:08:51.967397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:46.880 [2024-11-27 06:08:51.967576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:51.982552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:51.982723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.001036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.001233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.016315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.016491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.033918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.034069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.050438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.050624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.066971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.067011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.085900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.085938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.099427] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.099463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.115031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.115067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.133423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.133457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.148148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.148199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.165433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.165470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.181253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.181288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.199623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.199659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.214142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.214193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.138 [2024-11-27 06:08:52.229572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.138 [2024-11-27 06:08:52.229608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 10908.50 IOPS, 85.22 MiB/s [2024-11-27T06:08:52.493Z] [2024-11-27 06:08:52.244729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.244765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.261261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.261297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.275981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.276175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.291023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.291068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.306483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.306517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.323510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:47.396 [2024-11-27 06:08:52.323734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.340373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.340413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.357284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.357319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.373208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.373244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.389667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.389701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.408127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.408191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.396 [2024-11-27 06:08:52.426808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.396 [2024-11-27 06:08:52.427050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.397 [2024-11-27 06:08:52.443672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.397 [2024-11-27 06:08:52.443725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.397 [2024-11-27 06:08:52.461014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.397 [2024-11-27 06:08:52.461076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.397 [2024-11-27 06:08:52.477869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.397 [2024-11-27 06:08:52.478050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.397 [2024-11-27 06:08:52.489399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.397 [2024-11-27 06:08:52.489440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.506705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.506751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.523820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.523868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.540421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.540625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.552442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.552484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.568863] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.568913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.585451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.585501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.602040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.602089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.617519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.654 [2024-11-27 06:08:52.617562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.654 [2024-11-27 06:08:52.634155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.634195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.645846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.645889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.662715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.662759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.679365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.679574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.695576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.695618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.710901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.711112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.726889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.726933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.655 [2024-11-27 06:08:52.743630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.655 [2024-11-27 06:08:52.743673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.912 [2024-11-27 06:08:52.760247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.912 [2024-11-27 06:08:52.760287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.912 [2024-11-27 06:08:52.775737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.912 [2024-11-27 06:08:52.775780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.912 [2024-11-27 06:08:52.791542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.791585] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.808841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.808884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.825988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.826275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.841966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.842199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.857854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.858034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.874969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.875013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.891863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.891905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.907945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.907988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.925375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.925416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.942263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.942306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.958860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.958902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.975881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.975915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.913 [2024-11-27 06:08:52.994321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.913 [2024-11-27 06:08:52.994351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.009069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.009277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.026128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.026202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.042163] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.042223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.060852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.061008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.075779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.075959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.092476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.092525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.109471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.109503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.125427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.125468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.141684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.141742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.158302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.158484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.176814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.176849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.192486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.192517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.209471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.209671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.225284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.225317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 10685.00 IOPS, 83.48 MiB/s [2024-11-27T06:08:53.269Z] [2024-11-27 06:08:53.234852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.234886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.172 [2024-11-27 06:08:53.250395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.172 [2024-11-27 06:08:53.250431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.268195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:48.430 [2024-11-27 06:08:53.268235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.282232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.282277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.298947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.298982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.314011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.314046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.330419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.330584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.347461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.347514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.363413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.363448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.381780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.381962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.396253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.396298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.411917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.412080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.422394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.422598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.438297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.438331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.453959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.454152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.470544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.470579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.486945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.486981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.504095] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.504145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.430 [2024-11-27 06:08:53.519891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.430 [2024-11-27 06:08:53.519926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.530328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.530376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.546633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.546668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.561544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.561578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.576401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.576618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.591899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.592074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.600954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.600987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.616927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.616960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.626532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.626719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.642573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.642607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.659349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.659379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.675787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.675820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.692796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.692825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.708808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.708835] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.726856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.726884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.741074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.741102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.757906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.757935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.688 [2024-11-27 06:08:53.774610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.688 [2024-11-27 06:08:53.774637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.789473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.789517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.805328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.805355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.822817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.822845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.837999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.838028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.847318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.847344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.862759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.862802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.880673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.880731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.895660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.895692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.904958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.904985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.920767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.920795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.932484] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.932526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.949540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.949567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.965475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.965501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.983405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.983432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:53.997448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:53.997475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:54.013432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:54.013459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.947 [2024-11-27 06:08:54.030027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.947 [2024-11-27 06:08:54.030071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.047852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.047879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.062985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.063019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.072459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.072487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.087999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.088035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.103719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.103746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.120422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.120449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.135672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.135698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.146424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.146453] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.162841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.162868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.179285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.179312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.189935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.189965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.204646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.204688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.214659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.214704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.228668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.228695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 10988.00 IOPS, 85.84 MiB/s [2024-11-27T06:08:54.302Z] [2024-11-27 06:08:54.243045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.243072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.258035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.258094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.267090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.267117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.282125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.282178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.205 [2024-11-27 06:08:54.297225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.205 [2024-11-27 06:08:54.297253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.314932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.314959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.330013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.330073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.340231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.340288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 
06:08:54.356112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.356149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.373494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.373537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.389347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.389406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.405883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.405914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.424035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.424063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.439571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.439597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.457872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.457899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.474206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.474232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.491876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.491907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.507187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.507228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.523855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.523885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.540788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.540817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.463 [2024-11-27 06:08:54.557755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.463 [2024-11-27 06:08:54.557786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.574626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.574657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.590473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.590517] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.600423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.600452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.615798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.615828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.626207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.626244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.641701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.641776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.658424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.658452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.675024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.675055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.691775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.691805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.709515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.709552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.725095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.725124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.734620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.734647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.750950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.750978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.769113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.769167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.782973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.783001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.797956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.797985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.721 [2024-11-27 06:08:54.807411] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.721 [2024-11-27 06:08:54.807440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.979 [2024-11-27 06:08:54.822405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.979 [2024-11-27 06:08:54.822434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.979 [2024-11-27 06:08:54.838194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.979 [2024-11-27 06:08:54.838221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.979 [2024-11-27 06:08:54.849123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.849175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.865867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.865889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.881756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.881787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.899379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.899407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.916650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.916677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.931461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.931525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.948233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.948255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.964755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.964814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.981437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.981467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:54.998368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:54.998398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:55.014919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:55.014947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:55.031096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:55.031169] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:55.047271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:55.047314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.980 [2024-11-27 06:08:55.066237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.980 [2024-11-27 06:08:55.066264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.081605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.081634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.098926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.098956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.116546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.116574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.133000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.133029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.149629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.149657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.165901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.165930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.183476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.183505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.198505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.198535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.216593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.216623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.231334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.231363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 11135.80 IOPS, 87.00 MiB/s [2024-11-27T06:08:55.335Z] [2024-11-27 06:08:55.242395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.242425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 00:13:50.238 Latency(us) 00:13:50.238 [2024-11-27T06:08:55.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.238 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:50.238 Nvme1n1 
: 5.01 11139.23 87.03 0.00 0.00 11477.46 4319.42 25261.15 00:13:50.238 [2024-11-27T06:08:55.335Z] =================================================================================================================== 00:13:50.238 [2024-11-27T06:08:55.335Z] Total : 11139.23 87.03 0.00 0.00 11477.46 4319.42 25261.15 00:13:50.238 [2024-11-27 06:08:55.254342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.254370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.266337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.266361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.278343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.278368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.290348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.290374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.302381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.302406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.314353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.314378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.238 [2024-11-27 06:08:55.326381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.238 [2024-11-27 06:08:55.326405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.338353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.338377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.350356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.350379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.362356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.362396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.374357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.374380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.386361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.386383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.398364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.398386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.410411] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.410435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.422391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.422416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.434408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.434433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 [2024-11-27 06:08:55.446412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:50.499 [2024-11-27 06:08:55.446435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.499 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65824) - No such process 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65824 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 delay0 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.499 06:08:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:13:50.758 [2024-11-27 06:08:55.646308] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:57.319 Initializing NVMe Controllers 00:13:57.319 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:57.319 Initialization complete. Launching workers. 
00:13:57.319 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:13:57.319 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:13:57.319 success 246, unsuccessful 128, failed 0 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.319 rmmod nvme_tcp 00:13:57.319 rmmod nvme_fabrics 00:13:57.319 rmmod nvme_keyring 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65668 ']' 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65668 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65668 ']' 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65668 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65668 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:57.319 killing process with pid 65668 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65668' 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65668 00:13:57.319 06:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65668 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:57.319 06:09:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.319 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.320 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.320 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:13:57.320 00:13:57.320 real 0m25.102s 00:13:57.320 user 0m40.081s 00:13:57.320 sys 0m7.523s 00:13:57.320 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.320 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.320 ************************************ 00:13:57.320 END TEST nvmf_zcopy 00:13:57.320 ************************************ 00:13:57.578 06:09:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:57.578 06:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.578 06:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.578 06:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:57.578 ************************************ 00:13:57.579 START TEST nvmf_nmic 00:13:57.579 ************************************ 00:13:57.579 06:09:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:57.579 * Looking for test storage... 00:13:57.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.579 --rc genhtml_branch_coverage=1 00:13:57.579 --rc genhtml_function_coverage=1 00:13:57.579 --rc genhtml_legend=1 00:13:57.579 --rc geninfo_all_blocks=1 00:13:57.579 --rc geninfo_unexecuted_blocks=1 00:13:57.579 00:13:57.579 ' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.579 --rc genhtml_branch_coverage=1 00:13:57.579 --rc genhtml_function_coverage=1 00:13:57.579 --rc genhtml_legend=1 00:13:57.579 --rc geninfo_all_blocks=1 00:13:57.579 --rc geninfo_unexecuted_blocks=1 00:13:57.579 00:13:57.579 ' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.579 --rc genhtml_branch_coverage=1 00:13:57.579 --rc genhtml_function_coverage=1 00:13:57.579 --rc genhtml_legend=1 00:13:57.579 --rc geninfo_all_blocks=1 00:13:57.579 --rc geninfo_unexecuted_blocks=1 00:13:57.579 00:13:57.579 ' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.579 --rc genhtml_branch_coverage=1 00:13:57.579 --rc genhtml_function_coverage=1 00:13:57.579 --rc genhtml_legend=1 00:13:57.579 --rc geninfo_all_blocks=1 00:13:57.579 --rc geninfo_unexecuted_blocks=1 00:13:57.579 00:13:57.579 ' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.579 06:09:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.579 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:57.580 06:09:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.580 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:57.839 Cannot 
find device "nvmf_init_br" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:57.839 Cannot find device "nvmf_init_br2" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:57.839 Cannot find device "nvmf_tgt_br" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.839 Cannot find device "nvmf_tgt_br2" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:57.839 Cannot find device "nvmf_init_br" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:57.839 Cannot find device "nvmf_init_br2" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:57.839 Cannot find device "nvmf_tgt_br" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:57.839 Cannot find device "nvmf_tgt_br2" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:57.839 Cannot find device "nvmf_br" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:57.839 Cannot find device "nvmf_init_if" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:57.839 Cannot find device "nvmf_init_if2" 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:57.839 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:58.099 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:58.099 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:58.099 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:58.099 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:58.099 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:58.099 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:58.099 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:58.099 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:58.099 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:58.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:58.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:58.100 00:13:58.100 --- 10.0.0.3 ping statistics --- 00:13:58.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.100 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:58.100 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:58.100 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:13:58.100 00:13:58.100 --- 10.0.0.4 ping statistics --- 00:13:58.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.100 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:58.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:58.100 00:13:58.100 --- 10.0.0.1 ping statistics --- 00:13:58.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.100 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:58.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:58.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:58.100 00:13:58.100 --- 10.0.0.2 ping statistics --- 00:13:58.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.100 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66210 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66210 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66210 ']' 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.100 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 [2024-11-27 06:09:03.147246] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:13:58.100 [2024-11-27 06:09:03.147341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.359 [2024-11-27 06:09:03.303930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.359 [2024-11-27 06:09:03.398595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.359 [2024-11-27 06:09:03.398666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.359 [2024-11-27 06:09:03.398702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.359 [2024-11-27 06:09:03.398721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.359 [2024-11-27 06:09:03.398735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.359 [2024-11-27 06:09:03.399919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.359 [2024-11-27 06:09:03.400085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.359 [2024-11-27 06:09:03.400191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.359 [2024-11-27 06:09:03.400191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.619 [2024-11-27 06:09:03.459388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 [2024-11-27 06:09:03.578153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 Malloc0 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:58.619 06:09:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 [2024-11-27 06:09:03.647278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:58.619 test case1: single bdev can't be used in multiple subsystems 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.619 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.619 [2024-11-27 06:09:03.675112] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:58.619 [2024-11-27 06:09:03.675163] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:58.619 [2024-11-27 06:09:03.675177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.619 request: 00:13:58.619 { 00:13:58.619 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:58.619 "namespace": { 00:13:58.619 "bdev_name": "Malloc0", 00:13:58.619 "no_auto_visible": false, 00:13:58.619 "hide_metadata": false 00:13:58.620 }, 00:13:58.620 "method": "nvmf_subsystem_add_ns", 00:13:58.620 "req_id": 1 00:13:58.620 } 00:13:58.620 Got JSON-RPC error response 00:13:58.620 response: 00:13:58.620 { 00:13:58.620 "code": -32602, 00:13:58.620 "message": "Invalid parameters" 00:13:58.620 } 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:58.620 Adding namespace failed - expected result. 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:58.620 test case2: host connect to nvmf target in multiple paths 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:58.620 [2024-11-27 06:09:03.687252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.620 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:58.878 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:13:58.878 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:58.878 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:58.878 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.878 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:58.878 06:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:01.410 06:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:01.410 [global] 00:14:01.410 thread=1 00:14:01.410 invalidate=1 00:14:01.410 rw=write 00:14:01.410 time_based=1 00:14:01.410 runtime=1 00:14:01.410 ioengine=libaio 00:14:01.410 direct=1 00:14:01.410 bs=4096 00:14:01.410 iodepth=1 00:14:01.410 norandommap=0 00:14:01.410 numjobs=1 00:14:01.410 00:14:01.410 verify_dump=1 00:14:01.410 verify_backlog=512 00:14:01.410 verify_state_save=0 00:14:01.410 do_verify=1 00:14:01.410 verify=crc32c-intel 00:14:01.410 [job0] 00:14:01.410 filename=/dev/nvme0n1 00:14:01.410 Could not set queue depth (nvme0n1) 00:14:01.411 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:01.411 fio-3.35 00:14:01.411 Starting 1 thread 00:14:02.372 00:14:02.372 job0: (groupid=0, jobs=1): err= 0: pid=66291: Wed Nov 27 06:09:07 2024 00:14:02.372 read: IOPS=2728, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:14:02.372 slat (nsec): min=11256, max=85345, avg=14288.59, stdev=3996.33 00:14:02.372 clat (usec): min=136, max=6583, avg=198.91, stdev=236.11 00:14:02.372 lat (usec): min=150, max=6595, avg=213.20, stdev=236.84 00:14:02.372 clat percentiles (usec): 00:14:02.372 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:14:02.372 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:14:02.372 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 223], 00:14:02.372 | 99.00th=[ 245], 99.50th=[ 314], 99.90th=[ 5014], 99.95th=[ 5080], 00:14:02.372 | 99.99th=[ 6587] 00:14:02.372 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:02.372 slat (nsec): min=16793, max=81745, avg=21869.39, stdev=6014.38 00:14:02.372 clat (usec): min=83, max=363, avg=110.80, stdev=16.26 00:14:02.372 lat (usec): min=104, max=387, avg=132.67, stdev=17.71 00:14:02.372 clat percentiles (usec): 00:14:02.372 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 99], 00:14:02.372 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 113], 00:14:02.372 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 139], 00:14:02.372 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 258], 99.95th=[ 260], 00:14:02.372 | 99.99th=[ 363] 00:14:02.372 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:02.372 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:02.372 lat (usec) : 100=12.48%, 250=87.02%, 500=0.33%, 750=0.02% 00:14:02.372 lat (msec) : 4=0.07%, 10=0.09% 00:14:02.372 cpu : usr=1.80%, sys=8.80%, ctx=5804, majf=0, minf=5 00:14:02.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.372 issued rwts: total=2731,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.372 00:14:02.372 Run status group 0 (all jobs): 00:14:02.372 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:14:02.372 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:14:02.372 00:14:02.372 Disk stats 
(read/write): 00:14:02.372 nvme0n1: ios=2593/2560, merge=0/0, ticks=512/317, in_queue=829, util=90.18% 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:02.372 rmmod nvme_tcp 00:14:02.372 rmmod nvme_fabrics 00:14:02.372 rmmod nvme_keyring 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66210 ']' 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66210 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66210 ']' 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66210 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.372 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66210 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.633 killing process with pid 66210 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66210' 00:14:02.633 06:09:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 66210 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66210 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:02.633 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:14:02.891 00:14:02.891 real 0m5.523s 00:14:02.891 user 0m15.907s 00:14:02.891 sys 0m2.463s 00:14:02.891 06:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.891 ************************************ 00:14:02.891 06:09:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.891 END TEST nvmf_nmic 00:14:02.891 ************************************ 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:03.151 ************************************ 00:14:03.151 START TEST nvmf_fio_target 00:14:03.151 ************************************ 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:03.151 * Looking for test storage... 00:14:03.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.151 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:03.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.152 --rc genhtml_branch_coverage=1 00:14:03.152 --rc genhtml_function_coverage=1 00:14:03.152 --rc genhtml_legend=1 00:14:03.152 --rc geninfo_all_blocks=1 00:14:03.152 --rc geninfo_unexecuted_blocks=1 00:14:03.152 00:14:03.152 ' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:03.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.152 --rc genhtml_branch_coverage=1 00:14:03.152 --rc genhtml_function_coverage=1 00:14:03.152 --rc genhtml_legend=1 00:14:03.152 --rc geninfo_all_blocks=1 00:14:03.152 --rc geninfo_unexecuted_blocks=1 00:14:03.152 00:14:03.152 ' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:03.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.152 --rc genhtml_branch_coverage=1 00:14:03.152 --rc genhtml_function_coverage=1 00:14:03.152 --rc genhtml_legend=1 00:14:03.152 --rc geninfo_all_blocks=1 00:14:03.152 --rc geninfo_unexecuted_blocks=1 00:14:03.152 00:14:03.152 ' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:03.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.152 --rc genhtml_branch_coverage=1 00:14:03.152 --rc genhtml_function_coverage=1 00:14:03.152 --rc genhtml_legend=1 00:14:03.152 --rc geninfo_all_blocks=1 00:14:03.152 --rc geninfo_unexecuted_blocks=1 00:14:03.152 00:14:03.152 ' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:03.152 
06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.152 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.153 06:09:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:03.153 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:03.412 Cannot find device "nvmf_init_br" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:03.412 Cannot find device "nvmf_init_br2" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:03.412 Cannot find device "nvmf_tgt_br" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:03.412 Cannot find device "nvmf_tgt_br2" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:03.412 Cannot find device "nvmf_init_br" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:03.412 Cannot find device "nvmf_init_br2" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:03.412 Cannot find device "nvmf_tgt_br" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:03.412 Cannot find device "nvmf_tgt_br2" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:03.412 Cannot find device "nvmf_br" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:03.412 Cannot find device "nvmf_init_if" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:03.412 Cannot find device "nvmf_init_if2" 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:14:03.412 
06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:03.412 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:03.671 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:03.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:03.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:14:03.672 00:14:03.672 --- 10.0.0.3 ping statistics --- 00:14:03.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.672 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:03.672 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:03.672 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:14:03.672 00:14:03.672 --- 10.0.0.4 ping statistics --- 00:14:03.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.672 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:03.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:03.672 00:14:03.672 --- 10.0.0.1 ping statistics --- 00:14:03.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.672 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:03.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:03.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:03.672 00:14:03.672 --- 10.0.0.2 ping statistics --- 00:14:03.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.672 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66522 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66522 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66522 ']' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.672 06:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 [2024-11-27 06:09:08.724803] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
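The nvmf/common.sh lines above assemble the test network for this run: two veth pairs stay in the default namespace for the initiator (nvmf_init_if/nvmf_init_if2, 10.0.0.1-2), two are moved into nvmf_tgt_ns_spdk for the target (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3-4), all four peer ends are enslaved to the nvmf_br bridge, iptables ACCEPT rules open TCP port 4420, and nvmf_tgt is then started inside the namespace (common.sh@508). A minimal stand-alone sketch of the same idea, collapsed to a single veth pair instead of the script's bridge-of-four (interface names, addresses and the nvmf_tgt path reused from the log; everything else omitted):

  # target namespace plus one veth pair; the *_if end stays with the initiator
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address and bring up both ends
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # accept NVMe/TCP traffic on the default port and verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3

  # start the target inside the namespace, as the trace does at common.sh@508
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0xF &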
00:14:03.672 [2024-11-27 06:09:08.724892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.931 [2024-11-27 06:09:08.875857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.931 [2024-11-27 06:09:08.945850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.931 [2024-11-27 06:09:08.945912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.931 [2024-11-27 06:09:08.945927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.931 [2024-11-27 06:09:08.945938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.931 [2024-11-27 06:09:08.945947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.931 [2024-11-27 06:09:08.947297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.931 [2024-11-27 06:09:08.947430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.931 [2024-11-27 06:09:08.947544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.931 [2024-11-27 06:09:08.947546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.931 [2024-11-27 06:09:09.008567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.190 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:04.448 [2024-11-27 06:09:09.441285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.448 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:04.706 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:04.706 06:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.273 06:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:05.273 06:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.531 06:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:05.531 06:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.789 06:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:05.789 06:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:06.048 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.307 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:06.307 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.565 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:06.565 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.132 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:07.132 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:07.390 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:07.651 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:07.651 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.910 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:07.910 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.168 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:08.427 [2024-11-27 06:09:13.303528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:08.427 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:08.685 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:08.944 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:08.944 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:08.944 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:08.944 06:09:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.944 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:08.944 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:08.944 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:10.849 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:10.849 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:10.849 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.108 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:11.108 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.108 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:11.108 06:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:11.108 [global] 00:14:11.108 thread=1 00:14:11.108 invalidate=1 00:14:11.108 rw=write 00:14:11.108 time_based=1 00:14:11.108 runtime=1 00:14:11.108 ioengine=libaio 00:14:11.108 direct=1 00:14:11.108 bs=4096 00:14:11.108 iodepth=1 00:14:11.108 norandommap=0 00:14:11.108 numjobs=1 00:14:11.108 00:14:11.108 verify_dump=1 00:14:11.108 verify_backlog=512 00:14:11.108 verify_state_save=0 00:14:11.108 do_verify=1 00:14:11.108 verify=crc32c-intel 00:14:11.108 [job0] 00:14:11.108 filename=/dev/nvme0n1 00:14:11.108 [job1] 00:14:11.108 filename=/dev/nvme0n2 00:14:11.108 [job2] 00:14:11.108 filename=/dev/nvme0n3 00:14:11.108 [job3] 00:14:11.108 filename=/dev/nvme0n4 00:14:11.108 Could not set queue depth (nvme0n1) 00:14:11.108 Could not set queue depth (nvme0n2) 00:14:11.108 Could not set queue depth (nvme0n3) 00:14:11.109 Could not set queue depth (nvme0n4) 00:14:11.109 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:11.109 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:11.109 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:11.109 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:11.109 fio-3.35 00:14:11.109 Starting 4 threads 00:14:12.484 00:14:12.484 job0: (groupid=0, jobs=1): err= 0: pid=66705: Wed Nov 27 06:09:17 2024 00:14:12.484 read: IOPS=1972, BW=7888KiB/s (8077kB/s)(7896KiB/1001msec) 00:14:12.484 slat (nsec): min=12251, max=35367, avg=14392.35, stdev=2409.19 00:14:12.484 clat (usec): min=209, max=487, avg=265.20, stdev=20.98 00:14:12.484 lat (usec): min=233, max=502, avg=279.59, stdev=21.00 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:14:12.484 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:14:12.484 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:14:12.484 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 383], 99.95th=[ 490], 00:14:12.484 | 99.99th=[ 490] 
00:14:12.484 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:12.484 slat (usec): min=18, max=102, avg=23.24, stdev= 5.70 00:14:12.484 clat (usec): min=141, max=783, avg=192.22, stdev=27.96 00:14:12.484 lat (usec): min=163, max=808, avg=215.46, stdev=30.01 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:14:12.484 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:14:12.484 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:14:12.484 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 404], 99.95th=[ 725], 00:14:12.484 | 99.99th=[ 783] 00:14:12.484 bw ( KiB/s): min= 8192, max= 8192, per=26.53%, avg=8192.00, stdev= 0.00, samples=1 00:14:12.484 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:12.484 lat (usec) : 250=61.88%, 500=38.07%, 750=0.02%, 1000=0.02% 00:14:12.484 cpu : usr=1.50%, sys=5.90%, ctx=4022, majf=0, minf=7 00:14:12.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 issued rwts: total=1974,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.484 job1: (groupid=0, jobs=1): err= 0: pid=66706: Wed Nov 27 06:09:17 2024 00:14:12.484 read: IOPS=2482, BW=9930KiB/s (10.2MB/s)(9940KiB/1001msec) 00:14:12.484 slat (nsec): min=10799, max=44245, avg=13778.87, stdev=2993.63 00:14:12.484 clat (usec): min=143, max=2519, avg=205.64, stdev=65.59 00:14:12.484 lat (usec): min=155, max=2541, avg=219.42, stdev=65.86 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:14:12.484 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:14:12.484 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 253], 95.00th=[ 273], 00:14:12.484 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 742], 99.95th=[ 1745], 00:14:12.484 | 99.99th=[ 2507] 00:14:12.484 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:12.484 slat (nsec): min=13610, max=91580, avg=21280.39, stdev=4530.66 00:14:12.484 clat (usec): min=95, max=1914, avg=153.05, stdev=45.73 00:14:12.484 lat (usec): min=114, max=1936, avg=174.33, stdev=46.37 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 106], 5.00th=[ 114], 10.00th=[ 120], 20.00th=[ 129], 00:14:12.484 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 155], 00:14:12.484 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 208], 00:14:12.484 | 99.00th=[ 241], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 343], 00:14:12.484 | 99.99th=[ 1909] 00:14:12.484 bw ( KiB/s): min=11576, max=11576, per=37.50%, avg=11576.00, stdev= 0.00, samples=1 00:14:12.484 iops : min= 2894, max= 2894, avg=2894.00, stdev= 0.00, samples=1 00:14:12.484 lat (usec) : 100=0.10%, 250=94.33%, 500=5.47%, 750=0.04% 00:14:12.484 lat (msec) : 2=0.04%, 4=0.02% 00:14:12.484 cpu : usr=1.40%, sys=7.70%, ctx=5045, majf=0, minf=13 00:14:12.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 issued rwts: total=2485,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.484 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:14:12.484 job2: (groupid=0, jobs=1): err= 0: pid=66707: Wed Nov 27 06:09:17 2024 00:14:12.484 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:12.484 slat (nsec): min=15138, max=68138, avg=21884.17, stdev=5830.95 00:14:12.484 clat (usec): min=205, max=721, avg=335.17, stdev=61.24 00:14:12.484 lat (usec): min=234, max=744, avg=357.05, stdev=63.76 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 293], 00:14:12.484 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:14:12.484 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 416], 00:14:12.484 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 725], 00:14:12.484 | 99.99th=[ 725] 00:14:12.484 write: IOPS=1580, BW=6322KiB/s (6473kB/s)(6328KiB/1001msec); 0 zone resets 00:14:12.484 slat (usec): min=21, max=110, avg=31.79, stdev= 7.40 00:14:12.484 clat (usec): min=120, max=2812, avg=248.50, stdev=96.31 00:14:12.484 lat (usec): min=142, max=2842, avg=280.30, stdev=96.69 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 141], 5.00th=[ 161], 10.00th=[ 190], 20.00th=[ 217], 00:14:12.484 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:14:12.484 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 318], 00:14:12.484 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 2409], 99.95th=[ 2802], 00:14:12.484 | 99.99th=[ 2802] 00:14:12.484 bw ( KiB/s): min= 8192, max= 8192, per=26.53%, avg=8192.00, stdev= 0.00, samples=1 00:14:12.484 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:12.484 lat (usec) : 250=27.42%, 500=70.65%, 750=1.83%, 1000=0.03% 00:14:12.484 lat (msec) : 4=0.06% 00:14:12.484 cpu : usr=1.00%, sys=7.50%, ctx=3126, majf=0, minf=7 00:14:12.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 issued rwts: total=1536,1582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.484 job3: (groupid=0, jobs=1): err= 0: pid=66708: Wed Nov 27 06:09:17 2024 00:14:12.484 read: IOPS=1520, BW=6082KiB/s (6228kB/s)(6088KiB/1001msec) 00:14:12.484 slat (nsec): min=14675, max=56780, avg=20378.24, stdev=4435.43 00:14:12.484 clat (usec): min=207, max=654, avg=331.94, stdev=47.08 00:14:12.484 lat (usec): min=228, max=672, avg=352.32, stdev=48.37 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:14:12.484 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:14:12.484 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 396], 00:14:12.484 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 619], 99.95th=[ 652], 00:14:12.484 | 99.99th=[ 652] 00:14:12.484 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:12.484 slat (usec): min=21, max=114, avg=34.12, stdev= 9.53 00:14:12.484 clat (usec): min=116, max=772, avg=262.86, stdev=68.20 00:14:12.484 lat (usec): min=144, max=801, avg=296.98, stdev=73.36 00:14:12.484 clat percentiles (usec): 00:14:12.484 | 1.00th=[ 133], 5.00th=[ 161], 10.00th=[ 196], 20.00th=[ 223], 00:14:12.484 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:14:12.484 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 347], 95.00th=[ 424], 00:14:12.484 | 99.00th=[ 
474], 99.50th=[ 486], 99.90th=[ 668], 99.95th=[ 775], 00:14:12.484 | 99.99th=[ 775] 00:14:12.484 bw ( KiB/s): min= 8192, max= 8192, per=26.53%, avg=8192.00, stdev= 0.00, samples=1 00:14:12.484 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:12.484 lat (usec) : 250=23.15%, 500=75.67%, 750=1.14%, 1000=0.03% 00:14:12.484 cpu : usr=1.90%, sys=6.50%, ctx=3059, majf=0, minf=11 00:14:12.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.484 issued rwts: total=1522,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.484 00:14:12.484 Run status group 0 (all jobs): 00:14:12.484 READ: bw=29.3MiB/s (30.8MB/s), 6082KiB/s-9930KiB/s (6228kB/s-10.2MB/s), io=29.4MiB (30.8MB), run=1001-1001msec 00:14:12.484 WRITE: bw=30.1MiB/s (31.6MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.2MiB (31.6MB), run=1001-1001msec 00:14:12.484 00:14:12.484 Disk stats (read/write): 00:14:12.484 nvme0n1: ios=1586/1949, merge=0/0, ticks=465/394, in_queue=859, util=88.38% 00:14:12.484 nvme0n2: ios=2096/2278, merge=0/0, ticks=476/371, in_queue=847, util=89.25% 00:14:12.484 nvme0n3: ios=1141/1536, merge=0/0, ticks=406/398, in_queue=804, util=88.99% 00:14:12.484 nvme0n4: ios=1101/1536, merge=0/0, ticks=379/421, in_queue=800, util=89.74% 00:14:12.485 06:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:12.485 [global] 00:14:12.485 thread=1 00:14:12.485 invalidate=1 00:14:12.485 rw=randwrite 00:14:12.485 time_based=1 00:14:12.485 runtime=1 00:14:12.485 ioengine=libaio 00:14:12.485 direct=1 00:14:12.485 bs=4096 00:14:12.485 iodepth=1 00:14:12.485 norandommap=0 00:14:12.485 numjobs=1 00:14:12.485 00:14:12.485 verify_dump=1 00:14:12.485 verify_backlog=512 00:14:12.485 verify_state_save=0 00:14:12.485 do_verify=1 00:14:12.485 verify=crc32c-intel 00:14:12.485 [job0] 00:14:12.485 filename=/dev/nvme0n1 00:14:12.485 [job1] 00:14:12.485 filename=/dev/nvme0n2 00:14:12.485 [job2] 00:14:12.485 filename=/dev/nvme0n3 00:14:12.485 [job3] 00:14:12.485 filename=/dev/nvme0n4 00:14:12.485 Could not set queue depth (nvme0n1) 00:14:12.485 Could not set queue depth (nvme0n2) 00:14:12.485 Could not set queue depth (nvme0n3) 00:14:12.485 Could not set queue depth (nvme0n4) 00:14:12.485 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.485 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.485 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.485 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.485 fio-3.35 00:14:12.485 Starting 4 threads 00:14:13.859 00:14:13.859 job0: (groupid=0, jobs=1): err= 0: pid=66767: Wed Nov 27 06:09:18 2024 00:14:13.859 read: IOPS=1946, BW=7784KiB/s (7971kB/s)(7792KiB/1001msec) 00:14:13.859 slat (nsec): min=13660, max=52901, avg=17250.67, stdev=4185.75 00:14:13.859 clat (usec): min=166, max=1175, avg=263.00, stdev=59.92 00:14:13.859 lat (usec): min=181, max=1189, avg=280.26, stdev=60.71 00:14:13.859 clat percentiles (usec): 00:14:13.859 | 1.00th=[ 174], 
5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 208], 00:14:13.859 | 30.00th=[ 223], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 277], 00:14:13.859 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:14:13.859 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 437], 99.95th=[ 1172], 00:14:13.859 | 99.99th=[ 1172] 00:14:13.859 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:13.859 slat (usec): min=21, max=113, avg=26.58, stdev= 6.28 00:14:13.859 clat (usec): min=107, max=434, avg=190.95, stdev=39.03 00:14:13.859 lat (usec): min=130, max=512, avg=217.54, stdev=40.26 00:14:13.859 clat percentiles (usec): 00:14:13.859 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 155], 00:14:13.859 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 198], 00:14:13.859 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 258], 00:14:13.859 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 359], 00:14:13.859 | 99.99th=[ 437] 00:14:13.859 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:14:13.859 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:13.859 lat (usec) : 250=72.77%, 500=27.20% 00:14:13.859 lat (msec) : 2=0.03% 00:14:13.859 cpu : usr=2.60%, sys=6.40%, ctx=3996, majf=0, minf=11 00:14:13.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.859 issued rwts: total=1948,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.860 job1: (groupid=0, jobs=1): err= 0: pid=66768: Wed Nov 27 06:09:18 2024 00:14:13.860 read: IOPS=1954, BW=7816KiB/s (8004kB/s)(7824KiB/1001msec) 00:14:13.860 slat (nsec): min=13943, max=94659, avg=17482.25, stdev=4140.31 00:14:13.860 clat (usec): min=164, max=541, avg=265.85, stdev=56.68 00:14:13.860 lat (usec): min=178, max=558, avg=283.34, stdev=57.50 00:14:13.860 clat percentiles (usec): 00:14:13.860 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 208], 00:14:13.860 | 30.00th=[ 225], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 289], 00:14:13.860 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 351], 00:14:13.860 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 437], 99.95th=[ 545], 00:14:13.860 | 99.99th=[ 545] 00:14:13.860 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:13.860 slat (nsec): min=20300, max=80951, avg=25702.19, stdev=5540.79 00:14:13.860 clat (usec): min=99, max=751, avg=188.05, stdev=47.56 00:14:13.860 lat (usec): min=121, max=776, avg=213.75, stdev=48.45 00:14:13.860 clat percentiles (usec): 00:14:13.860 | 1.00th=[ 117], 5.00th=[ 128], 10.00th=[ 137], 20.00th=[ 147], 00:14:13.860 | 30.00th=[ 155], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 196], 00:14:13.860 | 70.00th=[ 217], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 260], 00:14:13.860 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 506], 99.95th=[ 635], 00:14:13.860 | 99.99th=[ 750] 00:14:13.860 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:14:13.860 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:13.860 lat (usec) : 100=0.02%, 250=68.41%, 500=31.47%, 750=0.07%, 1000=0.02% 00:14:13.860 cpu : usr=2.40%, sys=6.10%, ctx=4006, majf=0, minf=15 00:14:13.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:14:13.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.860 issued rwts: total=1956,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.860 job2: (groupid=0, jobs=1): err= 0: pid=66769: Wed Nov 27 06:09:18 2024 00:14:13.860 read: IOPS=1395, BW=5582KiB/s (5716kB/s)(5588KiB/1001msec) 00:14:13.860 slat (nsec): min=9893, max=57276, avg=15655.03, stdev=5060.57 00:14:13.860 clat (usec): min=208, max=765, avg=353.86, stdev=67.29 00:14:13.860 lat (usec): min=230, max=795, avg=369.51, stdev=68.30 00:14:13.860 clat percentiles (usec): 00:14:13.860 | 1.00th=[ 258], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:14:13.860 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 351], 00:14:13.860 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 420], 95.00th=[ 482], 00:14:13.860 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 742], 99.95th=[ 766], 00:14:13.860 | 99.99th=[ 766] 00:14:13.860 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:13.860 slat (nsec): min=12697, max=84445, avg=27692.86, stdev=7990.38 00:14:13.860 clat (usec): min=126, max=3004, avg=283.22, stdev=111.25 00:14:13.860 lat (usec): min=152, max=3033, avg=310.91, stdev=110.92 00:14:13.860 clat percentiles (usec): 00:14:13.860 | 1.00th=[ 151], 5.00th=[ 182], 10.00th=[ 206], 20.00th=[ 231], 00:14:13.860 | 30.00th=[ 247], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 293], 00:14:13.860 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 379], 00:14:13.860 | 99.00th=[ 515], 99.50th=[ 553], 99.90th=[ 2507], 99.95th=[ 2999], 00:14:13.860 | 99.99th=[ 2999] 00:14:13.860 bw ( KiB/s): min= 7960, max= 7960, per=27.79%, avg=7960.00, stdev= 0.00, samples=1 00:14:13.860 iops : min= 1990, max= 1990, avg=1990.00, stdev= 0.00, samples=1 00:14:13.860 lat (usec) : 250=17.08%, 500=80.40%, 750=2.39%, 1000=0.07% 00:14:13.860 lat (msec) : 4=0.07% 00:14:13.860 cpu : usr=1.70%, sys=5.20%, ctx=2949, majf=0, minf=7 00:14:13.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.860 issued rwts: total=1397,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.860 job3: (groupid=0, jobs=1): err= 0: pid=66770: Wed Nov 27 06:09:18 2024 00:14:13.860 read: IOPS=1309, BW=5239KiB/s (5364kB/s)(5244KiB/1001msec) 00:14:13.860 slat (nsec): min=8366, max=46380, avg=16216.87, stdev=4104.13 00:14:13.860 clat (usec): min=240, max=2534, avg=346.13, stdev=81.96 00:14:13.860 lat (usec): min=256, max=2549, avg=362.35, stdev=81.76 00:14:13.860 clat percentiles (usec): 00:14:13.860 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:14:13.860 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:14:13.860 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 424], 00:14:13.860 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 1401], 99.95th=[ 2540], 00:14:13.860 | 99.99th=[ 2540] 00:14:13.860 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:13.860 slat (nsec): min=11877, max=93450, avg=28351.43, stdev=10654.37 00:14:13.860 clat (usec): min=112, max=7944, avg=309.32, stdev=292.52 00:14:13.860 lat (usec): min=140, max=7997, 
avg=337.67, stdev=294.51 00:14:13.860 clat percentiles (usec): 00:14:13.860 | 1.00th=[ 155], 5.00th=[ 200], 10.00th=[ 223], 20.00th=[ 241], 00:14:13.860 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 297], 00:14:13.860 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 375], 95.00th=[ 465], 00:14:13.860 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 6456], 99.95th=[ 7963], 00:14:13.860 | 99.99th=[ 7963] 00:14:13.860 bw ( KiB/s): min= 7248, max= 7248, per=25.30%, avg=7248.00, stdev= 0.00, samples=1 00:14:13.860 iops : min= 1812, max= 1812, avg=1812.00, stdev= 0.00, samples=1 00:14:13.860 lat (usec) : 250=13.63%, 500=83.91%, 750=2.25% 00:14:13.860 lat (msec) : 2=0.04%, 4=0.07%, 10=0.11% 00:14:13.860 cpu : usr=2.00%, sys=5.10%, ctx=2854, majf=0, minf=13 00:14:13.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.860 issued rwts: total=1311,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.860 00:14:13.860 Run status group 0 (all jobs): 00:14:13.860 READ: bw=25.8MiB/s (27.1MB/s), 5239KiB/s-7816KiB/s (5364kB/s-8004kB/s), io=25.8MiB (27.1MB), run=1001-1001msec 00:14:13.860 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:14:13.860 00:14:13.860 Disk stats (read/write): 00:14:13.860 nvme0n1: ios=1586/1782, merge=0/0, ticks=471/368, in_queue=839, util=88.28% 00:14:13.860 nvme0n2: ios=1585/1776, merge=0/0, ticks=476/354, in_queue=830, util=88.60% 00:14:13.860 nvme0n3: ios=1024/1503, merge=0/0, ticks=360/414, in_queue=774, util=88.57% 00:14:13.860 nvme0n4: ios=1024/1414, merge=0/0, ticks=344/418, in_queue=762, util=88.85% 00:14:13.860 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:13.860 [global] 00:14:13.860 thread=1 00:14:13.860 invalidate=1 00:14:13.860 rw=write 00:14:13.860 time_based=1 00:14:13.860 runtime=1 00:14:13.860 ioengine=libaio 00:14:13.860 direct=1 00:14:13.860 bs=4096 00:14:13.860 iodepth=128 00:14:13.860 norandommap=0 00:14:13.860 numjobs=1 00:14:13.860 00:14:13.860 verify_dump=1 00:14:13.860 verify_backlog=512 00:14:13.860 verify_state_save=0 00:14:13.860 do_verify=1 00:14:13.860 verify=crc32c-intel 00:14:13.860 [job0] 00:14:13.860 filename=/dev/nvme0n1 00:14:13.860 [job1] 00:14:13.860 filename=/dev/nvme0n2 00:14:13.860 [job2] 00:14:13.860 filename=/dev/nvme0n3 00:14:13.860 [job3] 00:14:13.860 filename=/dev/nvme0n4 00:14:13.861 Could not set queue depth (nvme0n1) 00:14:13.861 Could not set queue depth (nvme0n2) 00:14:13.861 Could not set queue depth (nvme0n3) 00:14:13.861 Could not set queue depth (nvme0n4) 00:14:13.861 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.861 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.861 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.861 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.861 fio-3.35 00:14:13.861 Starting 4 threads 00:14:15.237 00:14:15.237 job0: (groupid=0, jobs=1): err= 0: pid=66829: Wed Nov 27 06:09:20 2024 
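The /dev/nvme0n1-n4 devices these fio jobs exercise were provisioned earlier in the trace (target/fio.sh@19-@46): rpc.py creates the TCP transport, several malloc bdevs, a RAID-0 array and a concat array built from them, attaches them as namespaces of nqn.2016-06.io.spdk:cnode1, adds a listener on 10.0.0.3:4420, and `nvme connect` on the initiator side then exposes four namespaces (the log waits for 4 devices, since concat0 is also attached). A condensed sketch of that sequence, with the concat steps and the host NQN/ID options left out (commands as they appear in the log, run against a target that is already up):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512        # Malloc0; repeated for Malloc1..Malloc3
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # initiator side: connect, then count exposed namespaces as waitforserial does
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME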
00:14:15.237 read: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1002msec) 00:14:15.237 slat (usec): min=4, max=5714, avg=170.16, stdev=871.73 00:14:15.237 clat (usec): min=1752, max=24689, avg=21692.95, stdev=2905.37 00:14:15.237 lat (usec): min=1766, max=24699, avg=21863.11, stdev=2784.44 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[ 2343], 5.00th=[17433], 10.00th=[21365], 20.00th=[21627], 00:14:15.237 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22414], 60.00th=[22414], 00:14:15.237 | 70.00th=[22676], 80.00th=[22938], 90.00th=[22938], 95.00th=[23200], 00:14:15.237 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:14:15.237 | 99.99th=[24773] 00:14:15.237 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:14:15.237 slat (usec): min=11, max=7942, avg=166.53, stdev=821.62 00:14:15.237 clat (usec): min=15958, max=25909, avg=21678.99, stdev=1317.31 00:14:15.237 lat (usec): min=16036, max=25928, avg=21845.52, stdev=1035.73 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[16581], 5.00th=[20579], 10.00th=[20841], 20.00th=[21103], 00:14:15.237 | 30.00th=[21365], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:14:15.237 | 70.00th=[21890], 80.00th=[22414], 90.00th=[22938], 95.00th=[23462], 00:14:15.237 | 99.00th=[25822], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:14:15.237 | 99.99th=[25822] 00:14:15.237 bw ( KiB/s): min=12288, max=12288, per=25.13%, avg=12288.00, stdev= 0.00, samples=2 00:14:15.237 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:14:15.237 lat (msec) : 2=0.21%, 4=0.31%, 10=0.55%, 20=4.11%, 50=94.83% 00:14:15.237 cpu : usr=2.50%, sys=7.79%, ctx=193, majf=0, minf=9 00:14:15.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:15.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.237 issued rwts: total=2750,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.237 job1: (groupid=0, jobs=1): err= 0: pid=66830: Wed Nov 27 06:09:20 2024 00:14:15.237 read: IOPS=2074, BW=8299KiB/s (8498kB/s)(8324KiB/1003msec) 00:14:15.237 slat (usec): min=4, max=13966, avg=251.23, stdev=1545.46 00:14:15.237 clat (usec): min=961, max=55820, avg=31608.36, stdev=11119.82 00:14:15.237 lat (usec): min=4923, max=55840, avg=31859.59, stdev=11117.30 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[ 5145], 5.00th=[21103], 10.00th=[22152], 20.00th=[22938], 00:14:15.237 | 30.00th=[23725], 40.00th=[24511], 50.00th=[29754], 60.00th=[31851], 00:14:15.237 | 70.00th=[33424], 80.00th=[42730], 90.00th=[53740], 95.00th=[55313], 00:14:15.237 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:14:15.237 | 99.99th=[55837] 00:14:15.237 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:14:15.237 slat (usec): min=8, max=12929, avg=179.47, stdev=1015.95 00:14:15.237 clat (usec): min=7846, max=45551, avg=23474.84, stdev=8618.15 00:14:15.237 lat (usec): min=7875, max=45572, avg=23654.31, stdev=8597.46 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[ 8160], 5.00th=[11338], 10.00th=[12911], 20.00th=[15926], 00:14:15.237 | 30.00th=[17695], 40.00th=[20317], 50.00th=[21103], 60.00th=[22938], 00:14:15.237 | 70.00th=[29230], 80.00th=[32375], 90.00th=[34341], 95.00th=[39060], 00:14:15.237 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 
99.95th=[45351], 00:14:15.237 | 99.99th=[45351] 00:14:15.237 bw ( KiB/s): min= 9224, max=10504, per=20.17%, avg=9864.00, stdev=905.10, samples=2 00:14:15.237 iops : min= 2306, max= 2626, avg=2466.00, stdev=226.27, samples=2 00:14:15.237 lat (usec) : 1000=0.02% 00:14:15.237 lat (msec) : 10=1.51%, 20=18.14%, 50=74.98%, 100=5.34% 00:14:15.237 cpu : usr=2.40%, sys=6.39%, ctx=146, majf=0, minf=13 00:14:15.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:14:15.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.237 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.237 job2: (groupid=0, jobs=1): err= 0: pid=66831: Wed Nov 27 06:09:20 2024 00:14:15.237 read: IOPS=3328, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec) 00:14:15.237 slat (usec): min=9, max=18206, avg=161.00, stdev=1087.02 00:14:15.237 clat (usec): min=1681, max=38610, avg=22227.41, stdev=3607.25 00:14:15.237 lat (usec): min=5207, max=40044, avg=22388.41, stdev=3650.26 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[13435], 5.00th=[19006], 10.00th=[19268], 20.00th=[19792], 00:14:15.237 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20841], 60.00th=[22414], 00:14:15.237 | 70.00th=[23987], 80.00th=[26870], 90.00th=[27395], 95.00th=[27919], 00:14:15.237 | 99.00th=[28705], 99.50th=[29230], 99.90th=[36439], 99.95th=[37487], 00:14:15.237 | 99.99th=[38536] 00:14:15.237 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:14:15.237 slat (usec): min=6, max=13304, avg=121.67, stdev=762.52 00:14:15.237 clat (usec): min=6632, max=27355, avg=14758.30, stdev=2175.72 00:14:15.237 lat (usec): min=10455, max=27396, avg=14879.97, stdev=2083.41 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[ 9765], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:14:15.237 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14353], 60.00th=[14877], 00:14:15.237 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:14:15.237 | 99.00th=[22938], 99.50th=[23462], 99.90th=[23725], 99.95th=[26346], 00:14:15.237 | 99.99th=[27395] 00:14:15.237 bw ( KiB/s): min=13320, max=15352, per=29.31%, avg=14336.00, stdev=1436.84, samples=2 00:14:15.237 iops : min= 3330, max= 3838, avg=3584.00, stdev=359.21, samples=2 00:14:15.237 lat (msec) : 2=0.01%, 10=0.74%, 20=64.20%, 50=35.05% 00:14:15.237 cpu : usr=2.50%, sys=10.29%, ctx=150, majf=0, minf=3 00:14:15.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:15.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.237 issued rwts: total=3335,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.237 job3: (groupid=0, jobs=1): err= 0: pid=66832: Wed Nov 27 06:09:20 2024 00:14:15.237 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec) 00:14:15.237 slat (usec): min=5, max=5717, avg=172.17, stdev=875.86 00:14:15.237 clat (usec): min=2028, max=24893, avg=21975.37, stdev=2148.96 00:14:15.237 lat (usec): min=7470, max=24910, avg=22147.54, stdev=1961.54 00:14:15.237 clat percentiles (usec): 00:14:15.237 | 1.00th=[ 7963], 5.00th=[18220], 10.00th=[21365], 20.00th=[21890], 00:14:15.237 | 30.00th=[21890], 40.00th=[22152], 
50.00th=[22414], 60.00th=[22676], 00:14:15.237 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23200], 95.00th=[23200], 00:14:15.238 | 99.00th=[24511], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:14:15.238 | 99.99th=[24773] 00:14:15.238 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:14:15.238 slat (usec): min=11, max=8151, avg=166.87, stdev=822.24 00:14:15.238 clat (usec): min=15768, max=25887, avg=21683.64, stdev=1301.76 00:14:15.238 lat (usec): min=17060, max=25904, avg=21850.51, stdev=1015.81 00:14:15.238 clat percentiles (usec): 00:14:15.238 | 1.00th=[16712], 5.00th=[20579], 10.00th=[20841], 20.00th=[21103], 00:14:15.238 | 30.00th=[21365], 40.00th=[21365], 50.00th=[21627], 60.00th=[21627], 00:14:15.238 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22938], 95.00th=[23725], 00:14:15.238 | 99.00th=[25822], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:14:15.238 | 99.99th=[25822] 00:14:15.238 bw ( KiB/s): min=12288, max=12312, per=25.15%, avg=12300.00, stdev=16.97, samples=2 00:14:15.238 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:14:15.238 lat (msec) : 4=0.02%, 10=0.55%, 20=4.13%, 50=95.30% 00:14:15.238 cpu : usr=3.29%, sys=7.07%, ctx=206, majf=0, minf=2 00:14:15.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:15.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.238 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.238 00:14:15.238 Run status group 0 (all jobs): 00:14:15.238 READ: bw=42.3MiB/s (44.4MB/s), 8299KiB/s-13.0MiB/s (8498kB/s-13.6MB/s), io=42.5MiB (44.6MB), run=1002-1005msec 00:14:15.238 WRITE: bw=47.8MiB/s (50.1MB/s), 9.97MiB/s-14.0MiB/s (10.5MB/s-14.7MB/s), io=48.0MiB (50.3MB), run=1002-1005msec 00:14:15.238 00:14:15.238 Disk stats (read/write): 00:14:15.238 nvme0n1: ios=2482/2560, merge=0/0, ticks=12064/11320, in_queue=23384, util=89.08% 00:14:15.238 nvme0n2: ios=1937/2048, merge=0/0, ticks=15775/10052, in_queue=25827, util=88.57% 00:14:15.238 nvme0n3: ios=2812/3072, merge=0/0, ticks=60488/41635, in_queue=102123, util=89.11% 00:14:15.238 nvme0n4: ios=2432/2560, merge=0/0, ticks=12116/11691, in_queue=23807, util=89.67% 00:14:15.238 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:15.238 [global] 00:14:15.238 thread=1 00:14:15.238 invalidate=1 00:14:15.238 rw=randwrite 00:14:15.238 time_based=1 00:14:15.238 runtime=1 00:14:15.238 ioengine=libaio 00:14:15.238 direct=1 00:14:15.238 bs=4096 00:14:15.238 iodepth=128 00:14:15.238 norandommap=0 00:14:15.238 numjobs=1 00:14:15.238 00:14:15.238 verify_dump=1 00:14:15.238 verify_backlog=512 00:14:15.238 verify_state_save=0 00:14:15.238 do_verify=1 00:14:15.238 verify=crc32c-intel 00:14:15.238 [job0] 00:14:15.238 filename=/dev/nvme0n1 00:14:15.238 [job1] 00:14:15.238 filename=/dev/nvme0n2 00:14:15.238 [job2] 00:14:15.238 filename=/dev/nvme0n3 00:14:15.238 [job3] 00:14:15.238 filename=/dev/nvme0n4 00:14:15.238 Could not set queue depth (nvme0n1) 00:14:15.238 Could not set queue depth (nvme0n2) 00:14:15.238 Could not set queue depth (nvme0n3) 00:14:15.238 Could not set queue depth (nvme0n4) 00:14:15.238 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:14:15.238 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.238 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.238 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.238 fio-3.35 00:14:15.238 Starting 4 threads 00:14:16.615 00:14:16.615 job0: (groupid=0, jobs=1): err= 0: pid=66886: Wed Nov 27 06:09:21 2024 00:14:16.615 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:14:16.615 slat (usec): min=6, max=8331, avg=139.09, stdev=631.24 00:14:16.615 clat (usec): min=9202, max=26054, avg=17977.53, stdev=2639.66 00:14:16.615 lat (usec): min=9222, max=26084, avg=18116.61, stdev=2645.05 00:14:16.615 clat percentiles (usec): 00:14:16.615 | 1.00th=[11338], 5.00th=[12780], 10.00th=[14353], 20.00th=[15795], 00:14:16.615 | 30.00th=[16712], 40.00th=[17433], 50.00th=[18482], 60.00th=[19268], 00:14:16.615 | 70.00th=[19530], 80.00th=[20317], 90.00th=[20841], 95.00th=[21103], 00:14:16.615 | 99.00th=[23987], 99.50th=[24249], 99.90th=[25297], 99.95th=[25297], 00:14:16.615 | 99.99th=[26084] 00:14:16.615 write: IOPS=3871, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1003msec); 0 zone resets 00:14:16.615 slat (usec): min=10, max=8605, avg=121.44, stdev=774.81 00:14:16.615 clat (usec): min=2559, max=28424, avg=16049.77, stdev=3106.05 00:14:16.615 lat (usec): min=2588, max=28475, avg=16171.21, stdev=3198.79 00:14:16.615 clat percentiles (usec): 00:14:16.615 | 1.00th=[ 7570], 5.00th=[10290], 10.00th=[13042], 20.00th=[14091], 00:14:16.615 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15926], 60.00th=[17433], 00:14:16.615 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19530], 95.00th=[20055], 00:14:16.615 | 99.00th=[22676], 99.50th=[23987], 99.90th=[27132], 99.95th=[27657], 00:14:16.615 | 99.99th=[28443] 00:14:16.615 bw ( KiB/s): min=14616, max=15432, per=35.64%, avg=15024.00, stdev=577.00, samples=2 00:14:16.615 iops : min= 3654, max= 3858, avg=3756.00, stdev=144.25, samples=2 00:14:16.615 lat (msec) : 4=0.47%, 10=1.89%, 20=83.05%, 50=14.60% 00:14:16.615 cpu : usr=3.19%, sys=10.68%, ctx=252, majf=0, minf=8 00:14:16.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:16.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.615 issued rwts: total=3584,3883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.615 job1: (groupid=0, jobs=1): err= 0: pid=66887: Wed Nov 27 06:09:21 2024 00:14:16.615 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:14:16.615 slat (usec): min=8, max=14483, avg=164.67, stdev=900.20 00:14:16.615 clat (usec): min=10440, max=54063, avg=21071.92, stdev=9195.41 00:14:16.615 lat (usec): min=10462, max=56718, avg=21236.59, stdev=9293.07 00:14:16.615 clat percentiles (usec): 00:14:16.615 | 1.00th=[12125], 5.00th=[14615], 10.00th=[14877], 20.00th=[15533], 00:14:16.615 | 30.00th=[16188], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:14:16.615 | 70.00th=[18220], 80.00th=[22938], 90.00th=[41157], 95.00th=[41681], 00:14:16.615 | 99.00th=[43254], 99.50th=[43254], 99.90th=[52691], 99.95th=[53216], 00:14:16.615 | 99.99th=[54264] 00:14:16.615 write: IOPS=2893, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1013msec); 0 zone resets 00:14:16.615 slat (usec): min=8, max=24383, 
avg=189.75, stdev=1099.84 00:14:16.615 clat (msec): min=8, max=141, avg=25.34, stdev=25.30 00:14:16.615 lat (msec): min=8, max=141, avg=25.53, stdev=25.47 00:14:16.615 clat percentiles (msec): 00:14:16.615 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 16], 00:14:16.615 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:14:16.615 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 52], 95.00th=[ 94], 00:14:16.615 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:14:16.615 | 99.99th=[ 142] 00:14:16.615 bw ( KiB/s): min= 6880, max=15575, per=26.63%, avg=11227.50, stdev=6148.29, samples=2 00:14:16.615 iops : min= 1720, max= 3893, avg=2806.50, stdev=1536.54, samples=2 00:14:16.615 lat (msec) : 10=1.88%, 20=75.51%, 50=17.08%, 100=3.31%, 250=2.22% 00:14:16.615 cpu : usr=2.67%, sys=8.00%, ctx=298, majf=0, minf=9 00:14:16.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:16.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.616 issued rwts: total=2560,2931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.616 job2: (groupid=0, jobs=1): err= 0: pid=66888: Wed Nov 27 06:09:21 2024 00:14:16.616 read: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec) 00:14:16.616 slat (usec): min=10, max=35086, avg=288.43, stdev=1911.33 00:14:16.616 clat (usec): min=20074, max=67721, avg=38955.27, stdev=6481.21 00:14:16.616 lat (usec): min=20089, max=67759, avg=39243.69, stdev=6522.46 00:14:16.616 clat percentiles (usec): 00:14:16.616 | 1.00th=[23200], 5.00th=[32113], 10.00th=[33424], 20.00th=[34866], 00:14:16.616 | 30.00th=[35390], 40.00th=[35390], 50.00th=[36439], 60.00th=[37487], 00:14:16.616 | 70.00th=[41157], 80.00th=[42730], 90.00th=[48497], 95.00th=[51643], 00:14:16.616 | 99.00th=[52691], 99.50th=[63177], 99.90th=[66847], 99.95th=[67634], 00:14:16.616 | 99.99th=[67634] 00:14:16.616 write: IOPS=1791, BW=7167KiB/s (7339kB/s)(7260KiB/1013msec); 0 zone resets 00:14:16.616 slat (usec): min=8, max=35145, avg=299.52, stdev=1728.65 00:14:16.616 clat (msec): min=10, max=143, avg=37.73, stdev=27.34 00:14:16.616 lat (msec): min=13, max=143, avg=38.03, stdev=27.48 00:14:16.616 clat percentiles (msec): 00:14:16.616 | 1.00th=[ 17], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 22], 00:14:16.616 | 30.00th=[ 25], 40.00th=[ 29], 50.00th=[ 31], 60.00th=[ 32], 00:14:16.616 | 70.00th=[ 33], 80.00th=[ 38], 90.00th=[ 73], 95.00th=[ 123], 00:14:16.616 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 144], 00:14:16.616 | 99.99th=[ 144] 00:14:16.616 bw ( KiB/s): min= 5304, max= 8208, per=16.02%, avg=6756.00, stdev=2053.44, samples=2 00:14:16.616 iops : min= 1326, max= 2052, avg=1689.00, stdev=513.36, samples=2 00:14:16.616 lat (msec) : 20=4.89%, 50=82.21%, 100=9.10%, 250=3.79% 00:14:16.616 cpu : usr=1.58%, sys=5.04%, ctx=174, majf=0, minf=5 00:14:16.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:16.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.616 issued rwts: total=1536,1815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.616 job3: (groupid=0, jobs=1): err= 0: pid=66889: Wed Nov 27 06:09:21 2024 00:14:16.616 read: IOPS=1838, BW=7355KiB/s 
(7531kB/s)(7384KiB/1004msec) 00:14:16.616 slat (usec): min=10, max=16229, avg=227.19, stdev=1279.54 00:14:16.616 clat (usec): min=2859, max=93674, avg=30312.66, stdev=13398.17 00:14:16.616 lat (usec): min=2871, max=93700, avg=30539.84, stdev=13516.87 00:14:16.616 clat percentiles (usec): 00:14:16.616 | 1.00th=[ 5080], 5.00th=[13042], 10.00th=[15139], 20.00th=[21103], 00:14:16.616 | 30.00th=[21103], 40.00th=[23987], 50.00th=[33817], 60.00th=[35390], 00:14:16.616 | 70.00th=[35390], 80.00th=[36439], 90.00th=[43254], 95.00th=[50070], 00:14:16.616 | 99.00th=[81265], 99.50th=[85459], 99.90th=[92799], 99.95th=[93848], 00:14:16.616 | 99.99th=[93848] 00:14:16.616 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:14:16.616 slat (usec): min=10, max=24217, avg=275.49, stdev=1579.74 00:14:16.616 clat (usec): min=10835, max=85908, avg=33191.82, stdev=17531.36 00:14:16.616 lat (usec): min=10876, max=87062, avg=33467.30, stdev=17668.45 00:14:16.616 clat percentiles (usec): 00:14:16.616 | 1.00th=[11076], 5.00th=[11600], 10.00th=[11994], 20.00th=[16188], 00:14:16.616 | 30.00th=[19268], 40.00th=[29754], 50.00th=[31327], 60.00th=[32900], 00:14:16.616 | 70.00th=[39060], 80.00th=[50594], 90.00th=[57410], 95.00th=[67634], 00:14:16.616 | 99.00th=[83362], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:14:16.616 | 99.99th=[85459] 00:14:16.616 bw ( KiB/s): min= 8175, max= 8192, per=19.41%, avg=8183.50, stdev=12.02, samples=2 00:14:16.616 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:14:16.616 lat (msec) : 4=0.39%, 10=1.10%, 20=23.52%, 50=61.50%, 100=13.48% 00:14:16.616 cpu : usr=2.29%, sys=5.58%, ctx=161, majf=0, minf=19 00:14:16.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:16.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.616 issued rwts: total=1846,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.616 00:14:16.616 Run status group 0 (all jobs): 00:14:16.616 READ: bw=36.7MiB/s (38.5MB/s), 6065KiB/s-14.0MiB/s (6211kB/s-14.6MB/s), io=37.2MiB (39.0MB), run=1003-1013msec 00:14:16.616 WRITE: bw=41.2MiB/s (43.2MB/s), 7167KiB/s-15.1MiB/s (7339kB/s-15.9MB/s), io=41.7MiB (43.7MB), run=1003-1013msec 00:14:16.616 00:14:16.616 Disk stats (read/write): 00:14:16.616 nvme0n1: ios=3068/3072, merge=0/0, ticks=27544/22561, in_queue=50105, util=88.34% 00:14:16.616 nvme0n2: ios=2603/2728, merge=0/0, ticks=25678/24572, in_queue=50250, util=88.81% 00:14:16.616 nvme0n3: ios=1557/1594, merge=0/0, ticks=48224/41156, in_queue=89380, util=89.41% 00:14:16.616 nvme0n4: ios=1149/1536, merge=0/0, ticks=19969/31109, in_queue=51078, util=88.39% 00:14:16.616 06:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:16.616 06:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66908 00:14:16.616 06:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:16.616 06:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:16.616 [global] 00:14:16.616 thread=1 00:14:16.616 invalidate=1 00:14:16.616 rw=read 00:14:16.616 time_based=1 00:14:16.616 runtime=10 00:14:16.616 ioengine=libaio 00:14:16.616 direct=1 00:14:16.616 bs=4096 00:14:16.616 iodepth=1 00:14:16.616 norandommap=1 
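The last fio invocation differs from the four before it: it reads from all four namespaces for 10 seconds (fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10) while, after the 3-second sleep above, target/fio.sh deletes the RAID arrays and malloc bdevs out from under the running job, so the io_u "Operation not supported" errors reported below are the expected consequence of that hot removal rather than an unrelated failure. Judging from the job files the wrapper prints, its flags map onto plain fio options roughly as -i→bs, -d→iodepth, -t→rw and -r→runtime; an approximately equivalent direct invocation for one of the devices (assumed mapping, not the wrapper's actual implementation):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=read --bs=4096 --iodepth=1 --ioengine=libaio --direct=1 \
      --time_based --runtime=10 --thread --invalidate=1 --norandommap \
      --numjobs=1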
00:14:16.616 numjobs=1 00:14:16.616 00:14:16.616 [job0] 00:14:16.616 filename=/dev/nvme0n1 00:14:16.616 [job1] 00:14:16.616 filename=/dev/nvme0n2 00:14:16.616 [job2] 00:14:16.616 filename=/dev/nvme0n3 00:14:16.616 [job3] 00:14:16.616 filename=/dev/nvme0n4 00:14:16.616 Could not set queue depth (nvme0n1) 00:14:16.616 Could not set queue depth (nvme0n2) 00:14:16.616 Could not set queue depth (nvme0n3) 00:14:16.616 Could not set queue depth (nvme0n4) 00:14:16.874 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.874 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.874 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.874 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.874 fio-3.35 00:14:16.874 Starting 4 threads 00:14:20.158 06:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:20.159 fio: pid=66951, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:20.159 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=44494848, buflen=4096 00:14:20.159 06:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:20.159 fio: pid=66950, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:20.159 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49504256, buflen=4096 00:14:20.159 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.159 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:20.417 fio: pid=66948, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:20.417 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13684736, buflen=4096 00:14:20.417 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.417 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:20.676 fio: pid=66949, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:20.676 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=17264640, buflen=4096 00:14:20.676 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.676 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:20.676 00:14:20.676 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66948: Wed Nov 27 06:09:25 2024 00:14:20.676 read: IOPS=5650, BW=22.1MiB/s (23.1MB/s)(77.1MiB/3491msec) 00:14:20.676 slat (usec): min=9, max=16167, avg=14.53, stdev=172.12 00:14:20.676 clat (usec): min=90, max=3078, avg=161.18, stdev=41.16 00:14:20.676 lat (usec): min=142, max=16583, avg=175.71, stdev=178.65 00:14:20.676 clat 
percentiles (usec): 00:14:20.676 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:14:20.676 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:14:20.676 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:14:20.677 | 99.00th=[ 225], 99.50th=[ 260], 99.90th=[ 510], 99.95th=[ 725], 00:14:20.677 | 99.99th=[ 2704] 00:14:20.677 bw ( KiB/s): min=21120, max=23520, per=34.02%, avg=22776.00, stdev=1034.04, samples=6 00:14:20.677 iops : min= 5280, max= 5880, avg=5694.00, stdev=258.51, samples=6 00:14:20.677 lat (usec) : 100=0.01%, 250=99.45%, 500=0.44%, 750=0.06%, 1000=0.02% 00:14:20.677 lat (msec) : 2=0.01%, 4=0.02% 00:14:20.677 cpu : usr=1.43%, sys=6.22%, ctx=19730, majf=0, minf=1 00:14:20.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 issued rwts: total=19726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.677 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66949: Wed Nov 27 06:09:25 2024 00:14:20.677 read: IOPS=5449, BW=21.3MiB/s (22.3MB/s)(80.5MiB/3780msec) 00:14:20.677 slat (usec): min=10, max=13853, avg=14.99, stdev=170.99 00:14:20.677 clat (usec): min=131, max=152791, avg=167.15, stdev=1067.47 00:14:20.677 lat (usec): min=142, max=152804, avg=182.14, stdev=1081.19 00:14:20.677 clat percentiles (usec): 00:14:20.677 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:14:20.677 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:14:20.677 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 178], 00:14:20.677 | 99.00th=[ 192], 99.50th=[ 208], 99.90th=[ 494], 99.95th=[ 709], 00:14:20.677 | 99.99th=[ 2278] 00:14:20.677 bw ( KiB/s): min=14544, max=23536, per=32.85%, avg=21994.29, stdev=3324.31, samples=7 00:14:20.677 iops : min= 3636, max= 5884, avg=5498.57, stdev=831.08, samples=7 00:14:20.677 lat (usec) : 250=99.69%, 500=0.21%, 750=0.05%, 1000=0.01% 00:14:20.677 lat (msec) : 2=0.01%, 4=0.01%, 20=0.01%, 250=0.01% 00:14:20.677 cpu : usr=1.27%, sys=6.14%, ctx=20606, majf=0, minf=2 00:14:20.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 issued rwts: total=20600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.677 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66950: Wed Nov 27 06:09:25 2024 00:14:20.677 read: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(47.2MiB/3229msec) 00:14:20.677 slat (usec): min=7, max=11289, avg=14.53, stdev=124.89 00:14:20.677 clat (usec): min=138, max=4009, avg=251.19, stdev=66.03 00:14:20.677 lat (usec): min=150, max=11473, avg=265.72, stdev=140.03 00:14:20.677 clat percentiles (usec): 00:14:20.677 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 192], 00:14:20.677 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:14:20.677 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:14:20.677 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 668], 99.95th=[ 1037], 00:14:20.677 | 99.99th=[ 
2212] 00:14:20.677 bw ( KiB/s): min=13760, max=17936, per=21.91%, avg=14672.00, stdev=1606.90, samples=6 00:14:20.677 iops : min= 3440, max= 4484, avg=3668.00, stdev=401.72, samples=6 00:14:20.677 lat (usec) : 250=27.18%, 500=72.66%, 750=0.07%, 1000=0.02% 00:14:20.677 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:14:20.677 cpu : usr=1.49%, sys=4.28%, ctx=12091, majf=0, minf=1 00:14:20.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 issued rwts: total=12087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.677 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66951: Wed Nov 27 06:09:25 2024 00:14:20.677 read: IOPS=3646, BW=14.2MiB/s (14.9MB/s)(42.4MiB/2979msec) 00:14:20.677 slat (nsec): min=7425, max=67812, avg=12320.85, stdev=4840.25 00:14:20.677 clat (usec): min=142, max=7146, avg=260.53, stdev=85.45 00:14:20.677 lat (usec): min=166, max=7160, avg=272.85, stdev=85.11 00:14:20.677 clat percentiles (usec): 00:14:20.677 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 251], 00:14:20.677 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:14:20.677 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 289], 95.00th=[ 297], 00:14:20.677 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 750], 99.95th=[ 1745], 00:14:20.677 | 99.99th=[ 2180] 00:14:20.677 bw ( KiB/s): min=13760, max=17888, per=22.03%, avg=14748.80, stdev=1759.51, samples=5 00:14:20.677 iops : min= 3440, max= 4472, avg=3687.20, stdev=439.88, samples=5 00:14:20.677 lat (usec) : 250=19.05%, 500=80.77%, 750=0.06%, 1000=0.04% 00:14:20.677 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:14:20.677 cpu : usr=1.11%, sys=4.26%, ctx=10866, majf=0, minf=2 00:14:20.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.677 issued rwts: total=10864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.677 00:14:20.677 Run status group 0 (all jobs): 00:14:20.677 READ: bw=65.4MiB/s (68.6MB/s), 14.2MiB/s-22.1MiB/s (14.9MB/s-23.1MB/s), io=247MiB (259MB), run=2979-3780msec 00:14:20.677 00:14:20.677 Disk stats (read/write): 00:14:20.677 nvme0n1: ios=19005/0, merge=0/0, ticks=3088/0, in_queue=3088, util=95.05% 00:14:20.677 nvme0n2: ios=19657/0, merge=0/0, ticks=3344/0, in_queue=3344, util=95.40% 00:14:20.677 nvme0n3: ios=11497/0, merge=0/0, ticks=2814/0, in_queue=2814, util=96.21% 00:14:20.677 nvme0n4: ios=10489/0, merge=0/0, ticks=2612/0, in_queue=2612, util=96.56% 00:14:20.936 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.936 06:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:21.236 06:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.236 06:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:21.506 06:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.506 06:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:22.073 06:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.073 06:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:22.073 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:22.073 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66908 00:14:22.073 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:22.073 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.332 nvmf hotplug test: fio failed as expected 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:22.332 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.591 rmmod nvme_tcp 00:14:22.591 rmmod nvme_fabrics 00:14:22.591 rmmod nvme_keyring 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66522 ']' 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66522 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66522 ']' 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66522 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66522 00:14:22.591 killing process with pid 66522 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66522' 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66522 00:14:22.591 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66522 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:22.849 06:09:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:22.849 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:23.109 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:23.109 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.109 06:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:14:23.109 00:14:23.109 real 0m20.020s 00:14:23.109 user 1m14.940s 00:14:23.109 sys 0m9.907s 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.109 ************************************ 00:14:23.109 END TEST nvmf_fio_target 00:14:23.109 ************************************ 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:23.109 ************************************ 00:14:23.109 START TEST nvmf_bdevio 00:14:23.109 ************************************ 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:23.109 * Looking for test storage... 
00:14:23.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:14:23.109 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.369 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:23.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.369 --rc genhtml_branch_coverage=1 00:14:23.369 --rc genhtml_function_coverage=1 00:14:23.369 --rc genhtml_legend=1 00:14:23.369 --rc geninfo_all_blocks=1 00:14:23.369 --rc geninfo_unexecuted_blocks=1 00:14:23.370 00:14:23.370 ' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.370 --rc genhtml_branch_coverage=1 00:14:23.370 --rc genhtml_function_coverage=1 00:14:23.370 --rc genhtml_legend=1 00:14:23.370 --rc geninfo_all_blocks=1 00:14:23.370 --rc geninfo_unexecuted_blocks=1 00:14:23.370 00:14:23.370 ' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.370 --rc genhtml_branch_coverage=1 00:14:23.370 --rc genhtml_function_coverage=1 00:14:23.370 --rc genhtml_legend=1 00:14:23.370 --rc geninfo_all_blocks=1 00:14:23.370 --rc geninfo_unexecuted_blocks=1 00:14:23.370 00:14:23.370 ' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.370 --rc genhtml_branch_coverage=1 00:14:23.370 --rc genhtml_function_coverage=1 00:14:23.370 --rc genhtml_legend=1 00:14:23.370 --rc geninfo_all_blocks=1 00:14:23.370 --rc geninfo_unexecuted_blocks=1 00:14:23.370 00:14:23.370 ' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
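Before any subsystem exists, nvmftestinit (nvmf/common.sh) sees NET_TYPE=virt and builds a veth topology so the initiator addresses (10.0.0.1/10.0.0.2) and the target namespace addresses (10.0.0.3/10.0.0.4) share one bridge. A condensed sketch of that setup, reconstructed from the trace that follows; interface, bridge and address names are taken from this log, the second pair (nvmf_init_if2/nvmf_tgt_if2) follows the same pattern, and the exact flags and link-up ordering live in nvmf_veth_init():

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joining both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup

The pings to 10.0.0.3/10.0.0.4 and back to 10.0.0.1/10.0.0.2 at the end of the setup trace are the sanity check that the bridge forwards in both directions before the target is started.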
00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.370 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:23.371 Cannot find device "nvmf_init_br" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:23.371 Cannot find device "nvmf_init_br2" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:23.371 Cannot find device "nvmf_tgt_br" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.371 Cannot find device "nvmf_tgt_br2" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:23.371 Cannot find device "nvmf_init_br" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:23.371 Cannot find device "nvmf_init_br2" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:23.371 Cannot find device "nvmf_tgt_br" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:23.371 Cannot find device "nvmf_tgt_br2" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:23.371 Cannot find device "nvmf_br" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:23.371 Cannot find device "nvmf_init_if" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:23.371 Cannot find device "nvmf_init_if2" 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.371 
06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.371 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.631 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:23.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:23.632 00:14:23.632 --- 10.0.0.3 ping statistics --- 00:14:23.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.632 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:23.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:23.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:23.632 00:14:23.632 --- 10.0.0.4 ping statistics --- 00:14:23.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.632 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:23.632 00:14:23.632 --- 10.0.0.1 ping statistics --- 00:14:23.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.632 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:23.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:23.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:23.632 00:14:23.632 --- 10.0.0.2 ping statistics --- 00:14:23.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.632 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67274 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67274 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67274 ']' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.632 06:09:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.891 [2024-11-27 06:09:28.748024] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:14:23.891 [2024-11-27 06:09:28.748120] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.891 [2024-11-27 06:09:28.899260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.891 [2024-11-27 06:09:28.958529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.891 [2024-11-27 06:09:28.958585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.891 [2024-11-27 06:09:28.958612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.891 [2024-11-27 06:09:28.958621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.891 [2024-11-27 06:09:28.958628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.891 [2024-11-27 06:09:28.960217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:23.891 [2024-11-27 06:09:28.960258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:23.891 [2024-11-27 06:09:28.960327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:23.891 [2024-11-27 06:09:28.960334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.151 [2024-11-27 06:09:29.014761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.151 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.151 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:24.151 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:24.151 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:24.151 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.151 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.152 [2024-11-27 06:09:29.130534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.152 Malloc0 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.152 [2024-11-27 06:09:29.195402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:24.152 { 00:14:24.152 "params": { 00:14:24.152 "name": "Nvme$subsystem", 00:14:24.152 "trtype": "$TEST_TRANSPORT", 00:14:24.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:24.152 "adrfam": "ipv4", 00:14:24.152 "trsvcid": "$NVMF_PORT", 00:14:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:24.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:24.152 "hdgst": ${hdgst:-false}, 00:14:24.152 "ddgst": ${ddgst:-false} 00:14:24.152 }, 00:14:24.152 "method": "bdev_nvme_attach_controller" 00:14:24.152 } 00:14:24.152 EOF 00:14:24.152 )") 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
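gen_nvmf_target_json assembles the JSON printed just below; bdevio reads it over --json /dev/fd/62 and attaches its own userspace initiator (bdev Nvme1n1) to the subsystem created above, so no kernel nvme-tcp connect is involved in this test. Purely as an illustration, not what the harness runs, roughly the same attachment could be made against a long-running SPDK application with the standard RPC, using the addresses from this run:

    # illustrative only: manual equivalent of the generated bdev_nvme_attach_controller config
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # the hostnqn and hdgst/ddgst=false settings visible in the JSON are omitted from this sketch
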
00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:24.152 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:24.152 "params": { 00:14:24.152 "name": "Nvme1", 00:14:24.152 "trtype": "tcp", 00:14:24.152 "traddr": "10.0.0.3", 00:14:24.152 "adrfam": "ipv4", 00:14:24.152 "trsvcid": "4420", 00:14:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.152 "hdgst": false, 00:14:24.152 "ddgst": false 00:14:24.152 }, 00:14:24.152 "method": "bdev_nvme_attach_controller" 00:14:24.152 }' 00:14:24.411 [2024-11-27 06:09:29.258892] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:14:24.411 [2024-11-27 06:09:29.258991] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67302 ] 00:14:24.411 [2024-11-27 06:09:29.414449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.411 [2024-11-27 06:09:29.482576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.411 [2024-11-27 06:09:29.482725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.411 [2024-11-27 06:09:29.482731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.669 [2024-11-27 06:09:29.548373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.669 I/O targets: 00:14:24.669 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:24.669 00:14:24.669 00:14:24.669 CUnit - A unit testing framework for C - Version 2.1-3 00:14:24.669 http://cunit.sourceforge.net/ 00:14:24.669 00:14:24.669 00:14:24.669 Suite: bdevio tests on: Nvme1n1 00:14:24.669 Test: blockdev write read block ...passed 00:14:24.669 Test: blockdev write zeroes read block ...passed 00:14:24.669 Test: blockdev write zeroes read no split ...passed 00:14:24.669 Test: blockdev write zeroes read split ...passed 00:14:24.669 Test: blockdev write zeroes read split partial ...passed 00:14:24.669 Test: blockdev reset ...[2024-11-27 06:09:29.705065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:24.669 [2024-11-27 06:09:29.705185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738190 (9): Bad file descriptor 00:14:24.669 [2024-11-27 06:09:29.721776] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:24.669 passed 00:14:24.669 Test: blockdev write read 8 blocks ...passed 00:14:24.669 Test: blockdev write read size > 128k ...passed 00:14:24.669 Test: blockdev write read invalid size ...passed 00:14:24.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:24.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:24.669 Test: blockdev write read max offset ...passed 00:14:24.669 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:24.669 Test: blockdev writev readv 8 blocks ...passed 00:14:24.669 Test: blockdev writev readv 30 x 1block ...passed 00:14:24.669 Test: blockdev writev readv block ...passed 00:14:24.669 Test: blockdev writev readv size > 128k ...passed 00:14:24.669 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:24.669 Test: blockdev comparev and writev ...[2024-11-27 06:09:29.728915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.729051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.729168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.729285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.729798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.729907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.729989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.730084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.730541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.730632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.730717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.730788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.731263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.731352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.731454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.669 [2024-11-27 06:09:29.731517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:24.669 passed 00:14:24.669 Test: blockdev nvme passthru rw ...passed 00:14:24.669 Test: blockdev nvme passthru vendor specific ...[2024-11-27 06:09:29.732393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.669 [2024-11-27 06:09:29.732494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:24.669 [2024-11-27 06:09:29.732675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.669 [2024-11-27 06:09:29.732777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:24.670 [2024-11-27 06:09:29.732974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.670 [2024-11-27 06:09:29.733063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:24.670 [2024-11-27 06:09:29.733264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.670 [2024-11-27 06:09:29.733364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:24.670 passed 00:14:24.670 Test: blockdev nvme admin passthru ...passed 00:14:24.670 Test: blockdev copy ...passed 00:14:24.670 00:14:24.670 Run Summary: Type Total Ran Passed Failed Inactive 00:14:24.670 suites 1 1 n/a 0 0 00:14:24.670 tests 23 23 23 0 0 00:14:24.670 asserts 152 152 152 0 n/a 00:14:24.670 00:14:24.670 Elapsed time = 0.145 seconds 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.928 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.928 rmmod nvme_tcp 00:14:24.928 rmmod nvme_fabrics 00:14:24.928 rmmod nvme_keyring 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
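For reference, the printf near the start of this bdevio block is emitting the bdev_nvme_attach_controller parameters that bdevio consumes as a JSON config. A minimal standalone sketch of such a config follows; the parameter values are copied from the trace above, while the outer "subsystems"/"config" wrapper is an assumption based on SPDK's usual JSON config layout rather than something shown in the trace:

    # hypothetical reconstruction of the config piped to bdevio (params taken from the trace)
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

With a config along these lines, bdevio attaches the remote TCP subsystem as the Nvme1n1 bdev, and the CUnit suite summarized above runs against it.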
00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67274 ']' 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67274 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67274 ']' 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67274 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67274 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:25.187 killing process with pid 67274 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67274' 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67274 00:14:25.187 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67274 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.446 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.704 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:14:25.704 00:14:25.704 real 0m2.469s 00:14:25.704 user 0m6.671s 00:14:25.704 sys 0m0.859s 00:14:25.704 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.704 ************************************ 00:14:25.704 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:25.704 END TEST nvmf_bdevio 00:14:25.704 ************************************ 00:14:25.704 06:09:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:25.704 00:14:25.704 real 2m36.877s 00:14:25.704 user 6m51.012s 00:14:25.704 sys 0m52.429s 00:14:25.704 06:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.704 06:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:25.704 ************************************ 00:14:25.704 END TEST nvmf_target_core 00:14:25.704 ************************************ 00:14:25.704 06:09:30 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:25.704 06:09:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.704 06:09:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.705 06:09:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.705 ************************************ 00:14:25.705 START TEST nvmf_target_extra 00:14:25.705 ************************************ 00:14:25.705 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:25.705 * Looking for test storage... 
00:14:25.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:25.705 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:25.705 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:25.705 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:25.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.963 --rc genhtml_branch_coverage=1 00:14:25.963 --rc genhtml_function_coverage=1 00:14:25.963 --rc genhtml_legend=1 00:14:25.963 --rc geninfo_all_blocks=1 00:14:25.963 --rc geninfo_unexecuted_blocks=1 00:14:25.963 00:14:25.963 ' 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:25.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.963 --rc genhtml_branch_coverage=1 00:14:25.963 --rc genhtml_function_coverage=1 00:14:25.963 --rc genhtml_legend=1 00:14:25.963 --rc geninfo_all_blocks=1 00:14:25.963 --rc geninfo_unexecuted_blocks=1 00:14:25.963 00:14:25.963 ' 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:25.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.963 --rc genhtml_branch_coverage=1 00:14:25.963 --rc genhtml_function_coverage=1 00:14:25.963 --rc genhtml_legend=1 00:14:25.963 --rc geninfo_all_blocks=1 00:14:25.963 --rc geninfo_unexecuted_blocks=1 00:14:25.963 00:14:25.963 ' 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:25.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.963 --rc genhtml_branch_coverage=1 00:14:25.963 --rc genhtml_function_coverage=1 00:14:25.963 --rc genhtml_legend=1 00:14:25.963 --rc geninfo_all_blocks=1 00:14:25.963 --rc geninfo_unexecuted_blocks=1 00:14:25.963 00:14:25.963 ' 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.963 06:09:30 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.963 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:25.964 ************************************ 00:14:25.964 START TEST nvmf_auth_target 00:14:25.964 ************************************ 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:25.964 * Looking for test storage... 
00:14:25.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:25.964 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:25.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.964 --rc genhtml_branch_coverage=1 00:14:25.964 --rc genhtml_function_coverage=1 00:14:25.964 --rc genhtml_legend=1 00:14:25.964 --rc geninfo_all_blocks=1 00:14:25.964 --rc geninfo_unexecuted_blocks=1 00:14:25.964 00:14:25.964 ' 00:14:25.964 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:25.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.964 --rc genhtml_branch_coverage=1 00:14:25.964 --rc genhtml_function_coverage=1 00:14:25.964 --rc genhtml_legend=1 00:14:25.964 --rc geninfo_all_blocks=1 00:14:25.965 --rc geninfo_unexecuted_blocks=1 00:14:25.965 00:14:25.965 ' 00:14:25.965 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:25.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.965 --rc genhtml_branch_coverage=1 00:14:25.965 --rc genhtml_function_coverage=1 00:14:25.965 --rc genhtml_legend=1 00:14:25.965 --rc geninfo_all_blocks=1 00:14:25.965 --rc geninfo_unexecuted_blocks=1 00:14:25.965 00:14:25.965 ' 00:14:25.965 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:25.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.965 --rc genhtml_branch_coverage=1 00:14:25.965 --rc genhtml_function_coverage=1 00:14:25.965 --rc genhtml_legend=1 00:14:25.965 --rc geninfo_all_blocks=1 00:14:25.965 --rc geninfo_unexecuted_blocks=1 00:14:25.965 00:14:25.965 ' 00:14:25.965 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.965 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:26.223 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.223 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.223 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.223 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.223 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.223 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:26.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.224 
06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:26.224 Cannot find device "nvmf_init_br" 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:26.224 Cannot find device "nvmf_init_br2" 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:26.224 Cannot find device "nvmf_tgt_br" 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:14:26.224 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.224 Cannot find device "nvmf_tgt_br2" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:26.225 Cannot find device "nvmf_init_br" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:26.225 Cannot find device "nvmf_init_br2" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:26.225 Cannot find device "nvmf_tgt_br" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:26.225 Cannot find device "nvmf_tgt_br2" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:26.225 Cannot find device "nvmf_br" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:26.225 Cannot find device "nvmf_init_if" 00:14:26.225 06:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:26.225 Cannot find device "nvmf_init_if2" 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.225 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.483 06:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:26.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:14:26.483 00:14:26.483 --- 10.0.0.3 ping statistics --- 00:14:26.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.483 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:26.483 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:26.483 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:14:26.483 00:14:26.483 --- 10.0.0.4 ping statistics --- 00:14:26.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.483 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:26.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:26.483 00:14:26.483 --- 10.0.0.1 ping statistics --- 00:14:26.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.483 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:26.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:26.483 00:14:26.483 --- 10.0.0.2 ping statistics --- 00:14:26.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.483 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67585 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67585 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67585 ']' 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
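Taken together, the nvmf_veth_init steps traced above build a small bridged veth topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces at 10.0.0.3 and 10.0.0.4, initiator-side interfaces at 10.0.0.1 and 10.0.0.2 in the root namespace, all joined through the nvmf_br bridge, with iptables rules admitting NVMe/TCP traffic on port 4420. The following is a condensed sketch using only commands that appear in the trace; it covers one initiator/target pair and omits the *_if2 pair and the individual "ip link set ... up" steps:

    # one veth pair for the initiator side, one for the target side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addresses: initiator 10.0.0.1/24 in the root namespace, target 10.0.0.3/24 inside the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bridge the two *_br peer ends so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # admit NVMe/TCP (port 4420) and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity check: the target address answers from the root namespace
    ping -c 1 10.0.0.3

The ping checks in the trace exercise all four addresses in both directions before the nvmf target application is started inside the namespace.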
00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.483 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67617 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=caaad709e781cc41766b6cb9409bf9596cef956394fddbeb 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FEo 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key caaad709e781cc41766b6cb9409bf9596cef956394fddbeb 0 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 caaad709e781cc41766b6cb9409bf9596cef956394fddbeb 0 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=caaad709e781cc41766b6cb9409bf9596cef956394fddbeb 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.882 06:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FEo 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FEo 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.FEo 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e9216c55067b9730e4f4a71a26bf719b0e2bf409ca33b6092630178f4ba13bad 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Idj 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e9216c55067b9730e4f4a71a26bf719b0e2bf409ca33b6092630178f4ba13bad 3 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e9216c55067b9730e4f4a71a26bf719b0e2bf409ca33b6092630178f4ba13bad 3 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e9216c55067b9730e4f4a71a26bf719b0e2bf409ca33b6092630178f4ba13bad 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Idj 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Idj 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Idj 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:27.882 06:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b566e5b1363ff9bb05bfe502e5b72b6b 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kWO 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b566e5b1363ff9bb05bfe502e5b72b6b 1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b566e5b1363ff9bb05bfe502e5b72b6b 1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b566e5b1363ff9bb05bfe502e5b72b6b 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kWO 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kWO 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kWO 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8516538c99deeefb710ed855edc245107734f15389896fdc 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qg5 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8516538c99deeefb710ed855edc245107734f15389896fdc 2 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8516538c99deeefb710ed855edc245107734f15389896fdc 2 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8516538c99deeefb710ed855edc245107734f15389896fdc 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qg5 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qg5 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.qg5 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:27.882 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ae6263bcca975e18a5734c232711c4bc5845c106451061ea 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.blT 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ae6263bcca975e18a5734c232711c4bc5845c106451061ea 2 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ae6263bcca975e18a5734c232711c4bc5845c106451061ea 2 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ae6263bcca975e18a5734c232711c4bc5845c106451061ea 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:27.883 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:28.142 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.blT 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.blT 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.blT 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:28.142 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3810bb72ec6c11221e2545ebbadaf1fc 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.L2h 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3810bb72ec6c11221e2545ebbadaf1fc 1 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3810bb72ec6c11221e2545ebbadaf1fc 1 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3810bb72ec6c11221e2545ebbadaf1fc 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.L2h 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.L2h 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.L2h 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a7bb03663d2bc872207c85a4a013c0348ee7a755431a1a6d761a59b1f3e2b450 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aaF 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
a7bb03663d2bc872207c85a4a013c0348ee7a755431a1a6d761a59b1f3e2b450 3 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a7bb03663d2bc872207c85a4a013c0348ee7a755431a1a6d761a59b1f3e2b450 3 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a7bb03663d2bc872207c85a4a013c0348ee7a755431a1a6d761a59b1f3e2b450 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aaF 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aaF 00:14:28.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.aaF 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67585 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67585 ']' 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.142 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67617 /var/tmp/host.sock 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67617 ']' 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
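Note on the key material generated above: every gen_dhchap_key call in this trace follows the same pattern — xxd pulls len/2 random bytes from /dev/urandom as a hex string, mktemp creates a /tmp/spdk.key-<digest>.XXX file, format_dhchap_key wraps the hex string into a DHHC-1 secret via an inline "python -" step (not expanded in the trace), and the file is chmod 0600 and recorded in keys[]/ckeys[]. A minimal stand-alone sketch of that flow follows; the base64-plus-CRC32 wrapping inside the python step is an assumption inferred from the DHHC-1:NN:...: secrets printed later in this log, not something the trace shows directly.

# Sketch of one gen_dhchap_key pass (sha512, 64 hex chars), condensed from the trace.
# Only xxd/mktemp/chmod come from the log; the python body is an assumption.
digest=3                                    # digests map above: null=0 sha256=1 sha384=2 sha512=3
key=$(xxd -p -c0 -l 32 /dev/urandom)        # 32 random bytes -> 64-character hex key
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import sys, base64, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte checksum appended to the hex string
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"
echo "$file"                                # the path stored in keys[i] / ckeys[i]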
00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.709 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FEo 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FEo 00:14:28.968 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FEo 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Idj ]] 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Idj 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Idj 00:14:29.227 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Idj 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kWO 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kWO 00:14:29.485 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kWO 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.qg5 ]] 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qg5 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qg5 00:14:29.744 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qg5 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.blT 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.blT 00:14:30.003 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.blT 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.L2h ]] 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L2h 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L2h 00:14:30.262 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L2h 00:14:30.520 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:30.520 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aaF 00:14:30.520 06:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.520 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.520 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.520 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aaF 00:14:30.520 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aaF 00:14:30.779 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:30.780 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:30.780 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.780 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.780 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.780 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.039 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.297 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.297 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.297 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.298 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.555 00:14:31.555 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.555 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.555 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.813 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.813 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.813 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.813 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.813 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.813 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.813 { 00:14:31.814 "cntlid": 1, 00:14:31.814 "qid": 0, 00:14:31.814 "state": "enabled", 00:14:31.814 "thread": "nvmf_tgt_poll_group_000", 00:14:31.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:31.814 "listen_address": { 00:14:31.814 "trtype": "TCP", 00:14:31.814 "adrfam": "IPv4", 00:14:31.814 "traddr": "10.0.0.3", 00:14:31.814 "trsvcid": "4420" 00:14:31.814 }, 00:14:31.814 "peer_address": { 00:14:31.814 "trtype": "TCP", 00:14:31.814 "adrfam": "IPv4", 00:14:31.814 "traddr": "10.0.0.1", 00:14:31.814 "trsvcid": "38082" 00:14:31.814 }, 00:14:31.814 "auth": { 00:14:31.814 "state": "completed", 00:14:31.814 "digest": "sha256", 00:14:31.814 "dhgroup": "null" 00:14:31.814 } 00:14:31.814 } 00:14:31.814 ]' 00:14:31.814 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.814 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.814 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.814 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:31.814 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.071 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.071 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.071 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.330 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:14:32.330 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.523 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.092 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.092 06:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.351 00:14:37.351 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.351 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.351 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.610 { 00:14:37.610 "cntlid": 3, 00:14:37.610 "qid": 0, 00:14:37.610 "state": "enabled", 00:14:37.610 "thread": "nvmf_tgt_poll_group_000", 00:14:37.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:37.610 "listen_address": { 00:14:37.610 "trtype": "TCP", 00:14:37.610 "adrfam": "IPv4", 00:14:37.610 "traddr": "10.0.0.3", 00:14:37.610 "trsvcid": "4420" 00:14:37.610 }, 00:14:37.610 "peer_address": { 00:14:37.610 "trtype": "TCP", 00:14:37.610 "adrfam": "IPv4", 00:14:37.610 "traddr": "10.0.0.1", 00:14:37.610 "trsvcid": "38106" 00:14:37.610 }, 00:14:37.610 "auth": { 00:14:37.610 "state": "completed", 00:14:37.610 "digest": "sha256", 00:14:37.610 "dhgroup": "null" 00:14:37.610 } 00:14:37.610 } 00:14:37.610 ]' 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.610 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.869 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret 
DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:14:37.869 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.805 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.064 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.323 00:14:39.323 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.323 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.323 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.582 { 00:14:39.582 "cntlid": 5, 00:14:39.582 "qid": 0, 00:14:39.582 "state": "enabled", 00:14:39.582 "thread": "nvmf_tgt_poll_group_000", 00:14:39.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:39.582 "listen_address": { 00:14:39.582 "trtype": "TCP", 00:14:39.582 "adrfam": "IPv4", 00:14:39.582 "traddr": "10.0.0.3", 00:14:39.582 "trsvcid": "4420" 00:14:39.582 }, 00:14:39.582 "peer_address": { 00:14:39.582 "trtype": "TCP", 00:14:39.582 "adrfam": "IPv4", 00:14:39.582 "traddr": "10.0.0.1", 00:14:39.582 "trsvcid": "38136" 00:14:39.582 }, 00:14:39.582 "auth": { 00:14:39.582 "state": "completed", 00:14:39.582 "digest": "sha256", 00:14:39.582 "dhgroup": "null" 00:14:39.582 } 00:14:39.582 } 00:14:39.582 ]' 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:39.582 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.841 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.841 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.841 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.099 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:14:40.099 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.666 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.667 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.925 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.184 00:14:41.184 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.184 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.184 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.443 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.443 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.443 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.443 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.737 { 00:14:41.737 "cntlid": 7, 00:14:41.737 "qid": 0, 00:14:41.737 "state": "enabled", 00:14:41.737 "thread": "nvmf_tgt_poll_group_000", 00:14:41.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:41.737 "listen_address": { 00:14:41.737 "trtype": "TCP", 00:14:41.737 "adrfam": "IPv4", 00:14:41.737 "traddr": "10.0.0.3", 00:14:41.737 "trsvcid": "4420" 00:14:41.737 }, 00:14:41.737 "peer_address": { 00:14:41.737 "trtype": "TCP", 00:14:41.737 "adrfam": "IPv4", 00:14:41.737 "traddr": "10.0.0.1", 00:14:41.737 "trsvcid": "44098" 00:14:41.737 }, 00:14:41.737 "auth": { 00:14:41.737 "state": "completed", 00:14:41.737 "digest": "sha256", 00:14:41.737 "dhgroup": "null" 00:14:41.737 } 00:14:41.737 } 00:14:41.737 ]' 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.737 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.996 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:14:41.996 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.562 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.129 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.388 00:14:43.388 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.388 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.388 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.647 { 00:14:43.647 "cntlid": 9, 00:14:43.647 "qid": 0, 00:14:43.647 "state": "enabled", 00:14:43.647 "thread": "nvmf_tgt_poll_group_000", 00:14:43.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:43.647 "listen_address": { 00:14:43.647 "trtype": "TCP", 00:14:43.647 "adrfam": "IPv4", 00:14:43.647 "traddr": "10.0.0.3", 00:14:43.647 "trsvcid": "4420" 00:14:43.647 }, 00:14:43.647 "peer_address": { 00:14:43.647 "trtype": "TCP", 00:14:43.647 "adrfam": "IPv4", 00:14:43.647 "traddr": "10.0.0.1", 00:14:43.647 "trsvcid": "44126" 00:14:43.647 }, 00:14:43.647 "auth": { 00:14:43.647 "state": "completed", 00:14:43.647 "digest": "sha256", 00:14:43.647 "dhgroup": "ffdhe2048" 00:14:43.647 } 00:14:43.647 } 00:14:43.647 ]' 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.647 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.906 
06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:14:43.906 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.841 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.099 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.099 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.099 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.099 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.099 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.357 00:14:45.357 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.357 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.357 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.615 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.615 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.615 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.615 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.615 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.615 { 00:14:45.615 "cntlid": 11, 00:14:45.615 "qid": 0, 00:14:45.615 "state": "enabled", 00:14:45.615 "thread": "nvmf_tgt_poll_group_000", 00:14:45.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:45.615 "listen_address": { 00:14:45.615 "trtype": "TCP", 00:14:45.615 "adrfam": "IPv4", 00:14:45.615 "traddr": "10.0.0.3", 00:14:45.615 "trsvcid": "4420" 00:14:45.615 }, 00:14:45.615 "peer_address": { 00:14:45.615 "trtype": "TCP", 00:14:45.615 "adrfam": "IPv4", 00:14:45.615 "traddr": "10.0.0.1", 00:14:45.615 "trsvcid": "44146" 00:14:45.615 }, 00:14:45.615 "auth": { 00:14:45.615 "state": "completed", 00:14:45.615 "digest": "sha256", 00:14:45.615 "dhgroup": "ffdhe2048" 00:14:45.615 } 00:14:45.615 } 00:14:45.615 ]' 00:14:45.616 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.874 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.874 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.874 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.874 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.874 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.874 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.875 
06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.133 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:14:46.133 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:14:46.701 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.701 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:46.701 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.701 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.960 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.960 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.960 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:46.960 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.220 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.479 00:14:47.479 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.479 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.479 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.737 { 00:14:47.737 "cntlid": 13, 00:14:47.737 "qid": 0, 00:14:47.737 "state": "enabled", 00:14:47.737 "thread": "nvmf_tgt_poll_group_000", 00:14:47.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:47.737 "listen_address": { 00:14:47.737 "trtype": "TCP", 00:14:47.737 "adrfam": "IPv4", 00:14:47.737 "traddr": "10.0.0.3", 00:14:47.737 "trsvcid": "4420" 00:14:47.737 }, 00:14:47.737 "peer_address": { 00:14:47.737 "trtype": "TCP", 00:14:47.737 "adrfam": "IPv4", 00:14:47.737 "traddr": "10.0.0.1", 00:14:47.737 "trsvcid": "44180" 00:14:47.737 }, 00:14:47.737 "auth": { 00:14:47.737 "state": "completed", 00:14:47.737 "digest": "sha256", 00:14:47.737 "dhgroup": "ffdhe2048" 00:14:47.737 } 00:14:47.737 } 00:14:47.737 ]' 00:14:47.737 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.052 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.052 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.052 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.052 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.052 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.052 06:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.052 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.335 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:14:48.335 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:48.903 06:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
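(Each connect_authenticate iteration in the trace follows the same three-step setup before any connection attempt. A minimal sketch of the iteration running at this point -- sha256 / ffdhe2048 / key3 -- assuming the keys key0..key3 and ckey0..ckey2 were registered earlier in the test; key3 has no controller key in this run, which is why the nvmf_subsystem_add_host call above carries no --dhchap-ctrlr-key.)

  # Sketch only: restrict the host to one digest/dhgroup, register the host NQN with its key,
  # then attach a controller using the matching key so authentication is exercised end to end.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 \
      --dhchap-key key3                      # ckey3 is unset, so no --dhchap-ctrlr-key here
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
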
00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.162 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.728 00:14:49.728 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.728 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.728 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.988 { 00:14:49.988 "cntlid": 15, 00:14:49.988 "qid": 0, 00:14:49.988 "state": "enabled", 00:14:49.988 "thread": "nvmf_tgt_poll_group_000", 00:14:49.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:49.988 "listen_address": { 00:14:49.988 "trtype": "TCP", 00:14:49.988 "adrfam": "IPv4", 00:14:49.988 "traddr": "10.0.0.3", 00:14:49.988 "trsvcid": "4420" 00:14:49.988 }, 00:14:49.988 "peer_address": { 00:14:49.988 "trtype": "TCP", 00:14:49.988 "adrfam": "IPv4", 00:14:49.988 "traddr": "10.0.0.1", 00:14:49.988 "trsvcid": "44224" 00:14:49.988 }, 00:14:49.988 "auth": { 00:14:49.988 "state": "completed", 00:14:49.988 "digest": "sha256", 00:14:49.988 "dhgroup": "ffdhe2048" 00:14:49.988 } 00:14:49.988 } 00:14:49.988 ]' 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.988 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.988 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.988 
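(After the in-SPDK check, each iteration repeats the handshake with the kernel initiator via nvme-cli and then removes the host entry, as the lines that follow show. A minimal sketch; the DHHC-1 secret is the literal blob printed in the trace, elided here as "..." for readability, and key3 has no controller secret in this run.)

  # Sketch only: kernel-initiator leg of the iteration, then cleanup for the next key/dhgroup pair.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 \
      --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 \
      --dhchap-secret 'DHHC-1:03:...'        # literal secret elided; see the trace below
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0
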
06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.988 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.246 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:14:50.246 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.178 06:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.437 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.695 00:14:51.695 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.695 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.695 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.953 { 00:14:51.953 "cntlid": 17, 00:14:51.953 "qid": 0, 00:14:51.953 "state": "enabled", 00:14:51.953 "thread": "nvmf_tgt_poll_group_000", 00:14:51.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:51.953 "listen_address": { 00:14:51.953 "trtype": "TCP", 00:14:51.953 "adrfam": "IPv4", 00:14:51.953 "traddr": "10.0.0.3", 00:14:51.953 "trsvcid": "4420" 00:14:51.953 }, 00:14:51.953 "peer_address": { 00:14:51.953 "trtype": "TCP", 00:14:51.953 "adrfam": "IPv4", 00:14:51.953 "traddr": "10.0.0.1", 00:14:51.953 "trsvcid": "55788" 00:14:51.953 }, 00:14:51.953 "auth": { 00:14:51.953 "state": "completed", 00:14:51.953 "digest": "sha256", 00:14:51.953 "dhgroup": "ffdhe3072" 00:14:51.953 } 00:14:51.953 } 00:14:51.953 ]' 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.953 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.953 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.953 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.212 06:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.212 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.212 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.478 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:14:52.478 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.055 06:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.321 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.890 00:14:53.890 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.890 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.890 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.150 { 00:14:54.150 "cntlid": 19, 00:14:54.150 "qid": 0, 00:14:54.150 "state": "enabled", 00:14:54.150 "thread": "nvmf_tgt_poll_group_000", 00:14:54.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:54.150 "listen_address": { 00:14:54.150 "trtype": "TCP", 00:14:54.150 "adrfam": "IPv4", 00:14:54.150 "traddr": "10.0.0.3", 00:14:54.150 "trsvcid": "4420" 00:14:54.150 }, 00:14:54.150 "peer_address": { 00:14:54.150 "trtype": "TCP", 00:14:54.150 "adrfam": "IPv4", 00:14:54.150 "traddr": "10.0.0.1", 00:14:54.150 "trsvcid": "55816" 00:14:54.150 }, 00:14:54.150 "auth": { 00:14:54.150 "state": "completed", 00:14:54.150 "digest": "sha256", 00:14:54.150 "dhgroup": "ffdhe3072" 00:14:54.150 } 00:14:54.150 } 00:14:54.150 ]' 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.150 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.409 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:14:54.409 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:14:55.343 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.344 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.602 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.603 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.861 00:14:55.861 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.861 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.861 06:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.120 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.120 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.120 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.120 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.120 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.120 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.120 { 00:14:56.120 "cntlid": 21, 00:14:56.120 "qid": 0, 00:14:56.120 "state": "enabled", 00:14:56.120 "thread": "nvmf_tgt_poll_group_000", 00:14:56.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:56.120 "listen_address": { 00:14:56.120 "trtype": "TCP", 00:14:56.120 "adrfam": "IPv4", 00:14:56.120 "traddr": "10.0.0.3", 00:14:56.120 "trsvcid": "4420" 00:14:56.120 }, 00:14:56.120 "peer_address": { 00:14:56.120 "trtype": "TCP", 00:14:56.120 "adrfam": "IPv4", 00:14:56.121 "traddr": "10.0.0.1", 00:14:56.121 "trsvcid": "55838" 00:14:56.121 }, 00:14:56.121 "auth": { 00:14:56.121 "state": "completed", 00:14:56.121 "digest": "sha256", 00:14:56.121 "dhgroup": "ffdhe3072" 00:14:56.121 } 00:14:56.121 } 00:14:56.121 ]' 00:14:56.121 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.121 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.121 06:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.121 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.121 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.379 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.379 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.379 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.636 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:14:56.636 06:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.201 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.460 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.028 00:14:58.028 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.028 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.028 06:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.287 { 00:14:58.287 "cntlid": 23, 00:14:58.287 "qid": 0, 00:14:58.287 "state": "enabled", 00:14:58.287 "thread": "nvmf_tgt_poll_group_000", 00:14:58.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:14:58.287 "listen_address": { 00:14:58.287 "trtype": "TCP", 00:14:58.287 "adrfam": "IPv4", 00:14:58.287 "traddr": "10.0.0.3", 00:14:58.287 "trsvcid": "4420" 00:14:58.287 }, 00:14:58.287 "peer_address": { 00:14:58.287 "trtype": "TCP", 00:14:58.287 "adrfam": "IPv4", 00:14:58.287 "traddr": "10.0.0.1", 00:14:58.287 "trsvcid": "55882" 00:14:58.287 }, 00:14:58.287 "auth": { 00:14:58.287 "state": "completed", 00:14:58.287 "digest": "sha256", 00:14:58.287 "dhgroup": "ffdhe3072" 00:14:58.287 } 00:14:58.287 } 00:14:58.287 ]' 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.287 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.546 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:14:58.546 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.481 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.740 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.740 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.740 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.740 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.999 00:14:59.999 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.999 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.999 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.259 { 00:15:00.259 "cntlid": 25, 00:15:00.259 "qid": 0, 00:15:00.259 "state": "enabled", 00:15:00.259 "thread": "nvmf_tgt_poll_group_000", 00:15:00.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:00.259 "listen_address": { 00:15:00.259 "trtype": "TCP", 00:15:00.259 "adrfam": "IPv4", 00:15:00.259 "traddr": "10.0.0.3", 00:15:00.259 "trsvcid": "4420" 00:15:00.259 }, 00:15:00.259 "peer_address": { 00:15:00.259 "trtype": "TCP", 00:15:00.259 "adrfam": "IPv4", 00:15:00.259 "traddr": "10.0.0.1", 00:15:00.259 "trsvcid": "55904" 00:15:00.259 }, 00:15:00.259 "auth": { 00:15:00.259 "state": "completed", 00:15:00.259 "digest": "sha256", 00:15:00.259 "dhgroup": "ffdhe4096" 00:15:00.259 } 00:15:00.259 } 00:15:00.259 ]' 00:15:00.259 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.517 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.776 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:00.776 06:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:01.342 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.601 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.860 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.118 00:15:02.118 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.118 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.118 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.376 { 00:15:02.376 "cntlid": 27, 00:15:02.376 "qid": 0, 00:15:02.376 "state": "enabled", 00:15:02.376 "thread": "nvmf_tgt_poll_group_000", 00:15:02.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:02.376 "listen_address": { 00:15:02.376 "trtype": "TCP", 00:15:02.376 "adrfam": "IPv4", 00:15:02.376 "traddr": "10.0.0.3", 00:15:02.376 "trsvcid": "4420" 00:15:02.376 }, 00:15:02.376 "peer_address": { 00:15:02.376 "trtype": "TCP", 00:15:02.376 "adrfam": "IPv4", 00:15:02.376 "traddr": "10.0.0.1", 00:15:02.376 "trsvcid": "45570" 00:15:02.376 }, 00:15:02.376 "auth": { 00:15:02.376 "state": "completed", 
00:15:02.376 "digest": "sha256", 00:15:02.376 "dhgroup": "ffdhe4096" 00:15:02.376 } 00:15:02.376 } 00:15:02.376 ]' 00:15:02.376 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.633 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.891 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:02.891 06:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.827 06:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.827 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.396 00:15:04.396 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.396 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.396 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.656 { 00:15:04.656 "cntlid": 29, 00:15:04.656 "qid": 0, 00:15:04.656 "state": "enabled", 00:15:04.656 "thread": "nvmf_tgt_poll_group_000", 00:15:04.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:04.656 "listen_address": { 00:15:04.656 "trtype": "TCP", 00:15:04.656 "adrfam": "IPv4", 00:15:04.656 "traddr": "10.0.0.3", 00:15:04.656 "trsvcid": "4420" 00:15:04.656 }, 00:15:04.656 "peer_address": { 00:15:04.656 "trtype": "TCP", 00:15:04.656 "adrfam": 
"IPv4", 00:15:04.656 "traddr": "10.0.0.1", 00:15:04.656 "trsvcid": "45588" 00:15:04.656 }, 00:15:04.656 "auth": { 00:15:04.656 "state": "completed", 00:15:04.656 "digest": "sha256", 00:15:04.656 "dhgroup": "ffdhe4096" 00:15:04.656 } 00:15:04.656 } 00:15:04.656 ]' 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.656 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.915 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:04.915 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.851 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:05.852 06:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.852 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.110 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.110 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.110 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.110 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.369 00:15:06.369 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.369 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.369 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.628 { 00:15:06.628 "cntlid": 31, 00:15:06.628 "qid": 0, 00:15:06.628 "state": "enabled", 00:15:06.628 "thread": "nvmf_tgt_poll_group_000", 00:15:06.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:06.628 "listen_address": { 00:15:06.628 "trtype": "TCP", 00:15:06.628 "adrfam": "IPv4", 00:15:06.628 "traddr": "10.0.0.3", 00:15:06.628 "trsvcid": "4420" 00:15:06.628 }, 00:15:06.628 "peer_address": { 00:15:06.628 "trtype": "TCP", 
00:15:06.628 "adrfam": "IPv4", 00:15:06.628 "traddr": "10.0.0.1", 00:15:06.628 "trsvcid": "45602" 00:15:06.628 }, 00:15:06.628 "auth": { 00:15:06.628 "state": "completed", 00:15:06.628 "digest": "sha256", 00:15:06.628 "dhgroup": "ffdhe4096" 00:15:06.628 } 00:15:06.628 } 00:15:06.628 ]' 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.628 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.887 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.887 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.887 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.146 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:07.146 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:07.714 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:08.282 
06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.282 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.540 00:15:08.540 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.540 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.540 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.897 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.897 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.897 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.897 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.897 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.897 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.897 { 00:15:08.897 "cntlid": 33, 00:15:08.897 "qid": 0, 00:15:08.897 "state": "enabled", 00:15:08.897 "thread": "nvmf_tgt_poll_group_000", 00:15:08.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:08.897 "listen_address": { 00:15:08.897 "trtype": "TCP", 00:15:08.897 "adrfam": "IPv4", 00:15:08.897 "traddr": 
"10.0.0.3", 00:15:08.897 "trsvcid": "4420" 00:15:08.897 }, 00:15:08.897 "peer_address": { 00:15:08.897 "trtype": "TCP", 00:15:08.897 "adrfam": "IPv4", 00:15:08.897 "traddr": "10.0.0.1", 00:15:08.897 "trsvcid": "45622" 00:15:08.897 }, 00:15:08.897 "auth": { 00:15:08.898 "state": "completed", 00:15:08.898 "digest": "sha256", 00:15:08.898 "dhgroup": "ffdhe6144" 00:15:08.898 } 00:15:08.898 } 00:15:08.898 ]' 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.898 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.185 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:09.185 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.752 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.011 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.577 00:15:10.577 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.577 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.577 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.835 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.835 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.835 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.835 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.835 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.835 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.835 { 00:15:10.835 "cntlid": 35, 00:15:10.835 "qid": 0, 00:15:10.835 "state": "enabled", 00:15:10.835 "thread": "nvmf_tgt_poll_group_000", 
00:15:10.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:10.835 "listen_address": { 00:15:10.835 "trtype": "TCP", 00:15:10.835 "adrfam": "IPv4", 00:15:10.835 "traddr": "10.0.0.3", 00:15:10.835 "trsvcid": "4420" 00:15:10.835 }, 00:15:10.835 "peer_address": { 00:15:10.835 "trtype": "TCP", 00:15:10.835 "adrfam": "IPv4", 00:15:10.835 "traddr": "10.0.0.1", 00:15:10.835 "trsvcid": "33292" 00:15:10.835 }, 00:15:10.835 "auth": { 00:15:10.835 "state": "completed", 00:15:10.835 "digest": "sha256", 00:15:10.835 "dhgroup": "ffdhe6144" 00:15:10.836 } 00:15:10.836 } 00:15:10.836 ]' 00:15:10.836 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.094 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.094 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.094 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:11.094 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.094 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.094 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.094 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.353 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:11.353 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.919 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.919 06:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.178 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.745 00:15:12.745 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.745 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.745 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.004 { 
00:15:13.004 "cntlid": 37, 00:15:13.004 "qid": 0, 00:15:13.004 "state": "enabled", 00:15:13.004 "thread": "nvmf_tgt_poll_group_000", 00:15:13.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:13.004 "listen_address": { 00:15:13.004 "trtype": "TCP", 00:15:13.004 "adrfam": "IPv4", 00:15:13.004 "traddr": "10.0.0.3", 00:15:13.004 "trsvcid": "4420" 00:15:13.004 }, 00:15:13.004 "peer_address": { 00:15:13.004 "trtype": "TCP", 00:15:13.004 "adrfam": "IPv4", 00:15:13.004 "traddr": "10.0.0.1", 00:15:13.004 "trsvcid": "33322" 00:15:13.004 }, 00:15:13.004 "auth": { 00:15:13.004 "state": "completed", 00:15:13.004 "digest": "sha256", 00:15:13.004 "dhgroup": "ffdhe6144" 00:15:13.004 } 00:15:13.004 } 00:15:13.004 ]' 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.004 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.262 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.262 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.262 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.262 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.262 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.521 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:13.521 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.087 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.345 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.911 00:15:14.911 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.911 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.911 06:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.169 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.169 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.169 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.169 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.169 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.169 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:15.169 { 00:15:15.169 "cntlid": 39, 00:15:15.169 "qid": 0, 00:15:15.169 "state": "enabled", 00:15:15.169 "thread": "nvmf_tgt_poll_group_000", 00:15:15.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:15.169 "listen_address": { 00:15:15.169 "trtype": "TCP", 00:15:15.169 "adrfam": "IPv4", 00:15:15.169 "traddr": "10.0.0.3", 00:15:15.169 "trsvcid": "4420" 00:15:15.169 }, 00:15:15.169 "peer_address": { 00:15:15.169 "trtype": "TCP", 00:15:15.169 "adrfam": "IPv4", 00:15:15.169 "traddr": "10.0.0.1", 00:15:15.169 "trsvcid": "33346" 00:15:15.169 }, 00:15:15.169 "auth": { 00:15:15.169 "state": "completed", 00:15:15.169 "digest": "sha256", 00:15:15.169 "dhgroup": "ffdhe6144" 00:15:15.169 } 00:15:15.169 } 00:15:15.169 ]' 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.170 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.429 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:15.429 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:16.362 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.362 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:16.362 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.362 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.363 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.363 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.363 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.363 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.363 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.620 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.187 00:15:17.187 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.187 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.187 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.446 { 00:15:17.446 "cntlid": 41, 00:15:17.446 "qid": 0, 00:15:17.446 "state": "enabled", 00:15:17.446 "thread": "nvmf_tgt_poll_group_000", 00:15:17.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:17.446 "listen_address": { 00:15:17.446 "trtype": "TCP", 00:15:17.446 "adrfam": "IPv4", 00:15:17.446 "traddr": "10.0.0.3", 00:15:17.446 "trsvcid": "4420" 00:15:17.446 }, 00:15:17.446 "peer_address": { 00:15:17.446 "trtype": "TCP", 00:15:17.446 "adrfam": "IPv4", 00:15:17.446 "traddr": "10.0.0.1", 00:15:17.446 "trsvcid": "33380" 00:15:17.446 }, 00:15:17.446 "auth": { 00:15:17.446 "state": "completed", 00:15:17.446 "digest": "sha256", 00:15:17.446 "dhgroup": "ffdhe8192" 00:15:17.446 } 00:15:17.446 } 00:15:17.446 ]' 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.446 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.705 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.705 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.705 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.705 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.705 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.964 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:17.964 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
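Each pass of the loop above provisions NVMe/TCP in-band authentication (DH-HMAC-CHAP) for one key/digest/DH-group combination before re-attaching the controller. A minimal standalone sketch of that provisioning step, built only from the RPC calls visible in this log, follows; it assumes the target app is on its default RPC socket, the host app's socket is /var/tmp/host.sock, and that key1/ckey1 were loaded earlier in the run, as the test does.

# Target side: authorize the host NQN on the subsystem with DH-HMAC-CHAP keys
# (key1/ckey1 are the key names this test uses; substitute the keys actually loaded).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: restrict negotiation to a single digest and DH group, then attach
# the controller with the matching key pair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1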
00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.551 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.810 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.377 00:15:19.377 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.377 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.377 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.636 06:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.636 { 00:15:19.636 "cntlid": 43, 00:15:19.636 "qid": 0, 00:15:19.636 "state": "enabled", 00:15:19.636 "thread": "nvmf_tgt_poll_group_000", 00:15:19.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:19.636 "listen_address": { 00:15:19.636 "trtype": "TCP", 00:15:19.636 "adrfam": "IPv4", 00:15:19.636 "traddr": "10.0.0.3", 00:15:19.636 "trsvcid": "4420" 00:15:19.636 }, 00:15:19.636 "peer_address": { 00:15:19.636 "trtype": "TCP", 00:15:19.636 "adrfam": "IPv4", 00:15:19.636 "traddr": "10.0.0.1", 00:15:19.636 "trsvcid": "33410" 00:15:19.636 }, 00:15:19.636 "auth": { 00:15:19.636 "state": "completed", 00:15:19.636 "digest": "sha256", 00:15:19.636 "dhgroup": "ffdhe8192" 00:15:19.636 } 00:15:19.636 } 00:15:19.636 ]' 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.636 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.895 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.895 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.895 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.895 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.895 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.895 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.154 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:20.154 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
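After each attach, the test confirms that authentication actually completed by dumping the subsystem's queue pairs and comparing the negotiated parameters; that is what the repeated [[ sha256 == sha256 ]] and [[ completed == completed ]] checks above are doing. A condensed sketch of that verification, using the same RPC and jq filters as the log (sha256/ffdhe8192 are the values expected for this iteration):

# Confirm the queue pair negotiated the expected digest/DH group and reached "completed"
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]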
00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.721 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.289 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:21.289 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.290 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.857 00:15:21.857 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.857 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.857 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.115 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.116 06:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.116 { 00:15:22.116 "cntlid": 45, 00:15:22.116 "qid": 0, 00:15:22.116 "state": "enabled", 00:15:22.116 "thread": "nvmf_tgt_poll_group_000", 00:15:22.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:22.116 "listen_address": { 00:15:22.116 "trtype": "TCP", 00:15:22.116 "adrfam": "IPv4", 00:15:22.116 "traddr": "10.0.0.3", 00:15:22.116 "trsvcid": "4420" 00:15:22.116 }, 00:15:22.116 "peer_address": { 00:15:22.116 "trtype": "TCP", 00:15:22.116 "adrfam": "IPv4", 00:15:22.116 "traddr": "10.0.0.1", 00:15:22.116 "trsvcid": "58662" 00:15:22.116 }, 00:15:22.116 "auth": { 00:15:22.116 "state": "completed", 00:15:22.116 "digest": "sha256", 00:15:22.116 "dhgroup": "ffdhe8192" 00:15:22.116 } 00:15:22.116 } 00:15:22.116 ]' 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.116 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.375 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:22.375 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.310 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.569 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:23.569 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.569 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.570 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.137 00:15:24.137 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.137 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.137 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.396 
06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.396 { 00:15:24.396 "cntlid": 47, 00:15:24.396 "qid": 0, 00:15:24.396 "state": "enabled", 00:15:24.396 "thread": "nvmf_tgt_poll_group_000", 00:15:24.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:24.396 "listen_address": { 00:15:24.396 "trtype": "TCP", 00:15:24.396 "adrfam": "IPv4", 00:15:24.396 "traddr": "10.0.0.3", 00:15:24.396 "trsvcid": "4420" 00:15:24.396 }, 00:15:24.396 "peer_address": { 00:15:24.396 "trtype": "TCP", 00:15:24.396 "adrfam": "IPv4", 00:15:24.396 "traddr": "10.0.0.1", 00:15:24.396 "trsvcid": "58698" 00:15:24.396 }, 00:15:24.396 "auth": { 00:15:24.396 "state": "completed", 00:15:24.396 "digest": "sha256", 00:15:24.396 "dhgroup": "ffdhe8192" 00:15:24.396 } 00:15:24.396 } 00:15:24.396 ]' 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.396 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.655 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.655 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.655 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.655 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.655 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.913 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:24.913 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.480 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.738 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.739 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.381 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.381 { 00:15:26.381 "cntlid": 49, 00:15:26.381 "qid": 0, 00:15:26.381 "state": "enabled", 00:15:26.381 "thread": "nvmf_tgt_poll_group_000", 00:15:26.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:26.381 "listen_address": { 00:15:26.381 "trtype": "TCP", 00:15:26.381 "adrfam": "IPv4", 00:15:26.381 "traddr": "10.0.0.3", 00:15:26.381 "trsvcid": "4420" 00:15:26.381 }, 00:15:26.381 "peer_address": { 00:15:26.381 "trtype": "TCP", 00:15:26.381 "adrfam": "IPv4", 00:15:26.381 "traddr": "10.0.0.1", 00:15:26.381 "trsvcid": "58742" 00:15:26.381 }, 00:15:26.381 "auth": { 00:15:26.381 "state": "completed", 00:15:26.381 "digest": "sha384", 00:15:26.381 "dhgroup": "null" 00:15:26.381 } 00:15:26.381 } 00:15:26.381 ]' 00:15:26.381 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.640 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.899 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:26.899 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.466 06:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.466 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.724 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.289 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.289 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.547 { 00:15:28.547 "cntlid": 51, 00:15:28.547 "qid": 0, 00:15:28.547 "state": "enabled", 00:15:28.547 "thread": "nvmf_tgt_poll_group_000", 00:15:28.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:28.547 "listen_address": { 00:15:28.547 "trtype": "TCP", 00:15:28.547 "adrfam": "IPv4", 00:15:28.547 "traddr": "10.0.0.3", 00:15:28.547 "trsvcid": "4420" 00:15:28.547 }, 00:15:28.547 "peer_address": { 00:15:28.547 "trtype": "TCP", 00:15:28.547 "adrfam": "IPv4", 00:15:28.547 "traddr": "10.0.0.1", 00:15:28.547 "trsvcid": "58754" 00:15:28.547 }, 00:15:28.547 "auth": { 00:15:28.547 "state": "completed", 00:15:28.547 "digest": "sha384", 00:15:28.547 "dhgroup": "null" 00:15:28.547 } 00:15:28.547 } 00:15:28.547 ]' 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.547 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.804 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:28.804 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.738 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.738 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.996 00:15:30.254 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.254 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.254 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.512 { 00:15:30.512 "cntlid": 53, 00:15:30.512 "qid": 0, 00:15:30.512 "state": "enabled", 00:15:30.512 "thread": "nvmf_tgt_poll_group_000", 00:15:30.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:30.512 "listen_address": { 00:15:30.512 "trtype": "TCP", 00:15:30.512 "adrfam": "IPv4", 00:15:30.512 "traddr": "10.0.0.3", 00:15:30.512 "trsvcid": "4420" 00:15:30.512 }, 00:15:30.512 "peer_address": { 00:15:30.512 "trtype": "TCP", 00:15:30.512 "adrfam": "IPv4", 00:15:30.512 "traddr": "10.0.0.1", 00:15:30.512 "trsvcid": "59338" 00:15:30.512 }, 00:15:30.512 "auth": { 00:15:30.512 "state": "completed", 00:15:30.512 "digest": "sha384", 00:15:30.512 "dhgroup": "null" 00:15:30.512 } 00:15:30.512 } 00:15:30.512 ]' 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.512 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.513 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.513 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.771 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.771 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.771 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.030 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:31.030 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.597 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.856 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.115 00:15:32.115 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.115 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:15:32.115 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.384 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.384 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.384 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.384 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.384 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.384 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.384 { 00:15:32.384 "cntlid": 55, 00:15:32.384 "qid": 0, 00:15:32.384 "state": "enabled", 00:15:32.384 "thread": "nvmf_tgt_poll_group_000", 00:15:32.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:32.384 "listen_address": { 00:15:32.384 "trtype": "TCP", 00:15:32.384 "adrfam": "IPv4", 00:15:32.384 "traddr": "10.0.0.3", 00:15:32.384 "trsvcid": "4420" 00:15:32.384 }, 00:15:32.384 "peer_address": { 00:15:32.384 "trtype": "TCP", 00:15:32.384 "adrfam": "IPv4", 00:15:32.384 "traddr": "10.0.0.1", 00:15:32.384 "trsvcid": "59362" 00:15:32.385 }, 00:15:32.385 "auth": { 00:15:32.385 "state": "completed", 00:15:32.385 "digest": "sha384", 00:15:32.385 "dhgroup": "null" 00:15:32.385 } 00:15:32.385 } 00:15:32.385 ]' 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.385 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.991 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:32.991 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.559 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.818 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.077 00:15:34.077 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.077 
06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.077 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.336 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.336 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.337 { 00:15:34.337 "cntlid": 57, 00:15:34.337 "qid": 0, 00:15:34.337 "state": "enabled", 00:15:34.337 "thread": "nvmf_tgt_poll_group_000", 00:15:34.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:34.337 "listen_address": { 00:15:34.337 "trtype": "TCP", 00:15:34.337 "adrfam": "IPv4", 00:15:34.337 "traddr": "10.0.0.3", 00:15:34.337 "trsvcid": "4420" 00:15:34.337 }, 00:15:34.337 "peer_address": { 00:15:34.337 "trtype": "TCP", 00:15:34.337 "adrfam": "IPv4", 00:15:34.337 "traddr": "10.0.0.1", 00:15:34.337 "trsvcid": "59388" 00:15:34.337 }, 00:15:34.337 "auth": { 00:15:34.337 "state": "completed", 00:15:34.337 "digest": "sha384", 00:15:34.337 "dhgroup": "ffdhe2048" 00:15:34.337 } 00:15:34.337 } 00:15:34.337 ]' 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.337 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.595 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.595 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.595 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.854 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:34.854 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: 
--dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:35.422 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.422 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:35.422 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.422 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.422 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.423 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.423 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.423 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.682 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.941 00:15:35.941 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.941 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.941 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.200 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.200 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.200 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.201 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.201 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.201 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.201 { 00:15:36.201 "cntlid": 59, 00:15:36.201 "qid": 0, 00:15:36.201 "state": "enabled", 00:15:36.201 "thread": "nvmf_tgt_poll_group_000", 00:15:36.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:36.201 "listen_address": { 00:15:36.201 "trtype": "TCP", 00:15:36.201 "adrfam": "IPv4", 00:15:36.201 "traddr": "10.0.0.3", 00:15:36.201 "trsvcid": "4420" 00:15:36.201 }, 00:15:36.201 "peer_address": { 00:15:36.201 "trtype": "TCP", 00:15:36.201 "adrfam": "IPv4", 00:15:36.201 "traddr": "10.0.0.1", 00:15:36.201 "trsvcid": "59426" 00:15:36.201 }, 00:15:36.201 "auth": { 00:15:36.201 "state": "completed", 00:15:36.201 "digest": "sha384", 00:15:36.201 "dhgroup": "ffdhe2048" 00:15:36.201 } 00:15:36.201 } 00:15:36.201 ]' 00:15:36.201 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.201 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.201 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.460 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:36.460 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.460 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.460 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.460 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.727 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:36.727 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.294 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.553 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.120 00:15:38.120 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.120 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.120 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.120 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.120 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.120 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.120 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.120 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.390 { 00:15:38.390 "cntlid": 61, 00:15:38.390 "qid": 0, 00:15:38.390 "state": "enabled", 00:15:38.390 "thread": "nvmf_tgt_poll_group_000", 00:15:38.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:38.390 "listen_address": { 00:15:38.390 "trtype": "TCP", 00:15:38.390 "adrfam": "IPv4", 00:15:38.390 "traddr": "10.0.0.3", 00:15:38.390 "trsvcid": "4420" 00:15:38.390 }, 00:15:38.390 "peer_address": { 00:15:38.390 "trtype": "TCP", 00:15:38.390 "adrfam": "IPv4", 00:15:38.390 "traddr": "10.0.0.1", 00:15:38.390 "trsvcid": "59454" 00:15:38.390 }, 00:15:38.390 "auth": { 00:15:38.390 "state": "completed", 00:15:38.390 "digest": "sha384", 00:15:38.390 "dhgroup": "ffdhe2048" 00:15:38.390 } 00:15:38.390 } 00:15:38.390 ]' 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.390 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.650 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:38.650 06:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.587 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.155 00:15:40.155 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.155 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.155 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.414 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.414 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.415 { 00:15:40.415 "cntlid": 63, 00:15:40.415 "qid": 0, 00:15:40.415 "state": "enabled", 00:15:40.415 "thread": "nvmf_tgt_poll_group_000", 00:15:40.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:40.415 "listen_address": { 00:15:40.415 "trtype": "TCP", 00:15:40.415 "adrfam": "IPv4", 00:15:40.415 "traddr": "10.0.0.3", 00:15:40.415 "trsvcid": "4420" 00:15:40.415 }, 00:15:40.415 "peer_address": { 00:15:40.415 "trtype": "TCP", 00:15:40.415 "adrfam": "IPv4", 00:15:40.415 "traddr": "10.0.0.1", 00:15:40.415 "trsvcid": "59488" 00:15:40.415 }, 00:15:40.415 "auth": { 00:15:40.415 "state": "completed", 00:15:40.415 "digest": "sha384", 00:15:40.415 "dhgroup": "ffdhe2048" 00:15:40.415 } 00:15:40.415 } 00:15:40.415 ]' 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.415 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.983 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:40.983 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.550 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.551 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.809 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:41.809 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.809 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.809 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:41.809 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:41.810 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.069 00:15:42.069 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.069 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.069 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.327 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.327 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.328 { 00:15:42.328 "cntlid": 65, 00:15:42.328 "qid": 0, 00:15:42.328 "state": "enabled", 00:15:42.328 "thread": "nvmf_tgt_poll_group_000", 00:15:42.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:42.328 "listen_address": { 00:15:42.328 "trtype": "TCP", 00:15:42.328 "adrfam": "IPv4", 00:15:42.328 "traddr": "10.0.0.3", 00:15:42.328 "trsvcid": "4420" 00:15:42.328 }, 00:15:42.328 "peer_address": { 00:15:42.328 "trtype": "TCP", 00:15:42.328 "adrfam": "IPv4", 00:15:42.328 "traddr": "10.0.0.1", 00:15:42.328 "trsvcid": "49864" 00:15:42.328 }, 00:15:42.328 "auth": { 00:15:42.328 "state": "completed", 00:15:42.328 "digest": "sha384", 00:15:42.328 "dhgroup": "ffdhe3072" 00:15:42.328 } 00:15:42.328 } 00:15:42.328 ]' 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.328 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.586 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.586 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.586 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.586 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.586 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.845 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:42.845 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.412 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.671 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.671 06:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.672 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.239 00:15:44.239 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.239 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.239 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.496 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.496 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.496 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.497 { 00:15:44.497 "cntlid": 67, 00:15:44.497 "qid": 0, 00:15:44.497 "state": "enabled", 00:15:44.497 "thread": "nvmf_tgt_poll_group_000", 00:15:44.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:44.497 "listen_address": { 00:15:44.497 "trtype": "TCP", 00:15:44.497 "adrfam": "IPv4", 00:15:44.497 "traddr": "10.0.0.3", 00:15:44.497 "trsvcid": "4420" 00:15:44.497 }, 00:15:44.497 "peer_address": { 00:15:44.497 "trtype": "TCP", 00:15:44.497 "adrfam": "IPv4", 00:15:44.497 "traddr": "10.0.0.1", 00:15:44.497 "trsvcid": "49880" 00:15:44.497 }, 00:15:44.497 "auth": { 00:15:44.497 "state": "completed", 00:15:44.497 "digest": "sha384", 00:15:44.497 "dhgroup": "ffdhe3072" 00:15:44.497 } 00:15:44.497 } 00:15:44.497 ]' 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.497 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.755 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.755 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.755 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.013 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:45.013 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.625 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.883 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.884 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.884 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.141 00:15:46.141 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.141 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.141 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.709 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.709 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.709 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.709 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.709 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.709 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.709 { 00:15:46.709 "cntlid": 69, 00:15:46.709 "qid": 0, 00:15:46.709 "state": "enabled", 00:15:46.709 "thread": "nvmf_tgt_poll_group_000", 00:15:46.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:46.709 "listen_address": { 00:15:46.709 "trtype": "TCP", 00:15:46.709 "adrfam": "IPv4", 00:15:46.709 "traddr": "10.0.0.3", 00:15:46.709 "trsvcid": "4420" 00:15:46.709 }, 00:15:46.709 "peer_address": { 00:15:46.709 "trtype": "TCP", 00:15:46.709 "adrfam": "IPv4", 00:15:46.709 "traddr": "10.0.0.1", 00:15:46.709 "trsvcid": "49916" 00:15:46.709 }, 00:15:46.709 "auth": { 00:15:46.709 "state": "completed", 00:15:46.709 "digest": "sha384", 00:15:46.709 "dhgroup": "ffdhe3072" 00:15:46.709 } 00:15:46.709 } 00:15:46.709 ]' 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:46.710 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.969 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:46.969 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.537 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.795 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.362 00:15:48.362 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.362 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.362 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.621 { 00:15:48.621 "cntlid": 71, 00:15:48.621 "qid": 0, 00:15:48.621 "state": "enabled", 00:15:48.621 "thread": "nvmf_tgt_poll_group_000", 00:15:48.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:48.621 "listen_address": { 00:15:48.621 "trtype": "TCP", 00:15:48.621 "adrfam": "IPv4", 00:15:48.621 "traddr": "10.0.0.3", 00:15:48.621 "trsvcid": "4420" 00:15:48.621 }, 00:15:48.621 "peer_address": { 00:15:48.621 "trtype": "TCP", 00:15:48.621 "adrfam": "IPv4", 00:15:48.621 "traddr": "10.0.0.1", 00:15:48.621 "trsvcid": "49938" 00:15:48.621 }, 00:15:48.621 "auth": { 00:15:48.621 "state": "completed", 00:15:48.621 "digest": "sha384", 00:15:48.621 "dhgroup": "ffdhe3072" 00:15:48.621 } 00:15:48.621 } 00:15:48.621 ]' 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.621 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.880 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:48.880 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 06:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.816 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.384 00:15:50.384 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.384 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.384 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.642 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.642 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.642 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.642 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.642 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.642 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.642 { 00:15:50.642 "cntlid": 73, 00:15:50.642 "qid": 0, 00:15:50.642 "state": "enabled", 00:15:50.642 "thread": "nvmf_tgt_poll_group_000", 00:15:50.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:50.642 "listen_address": { 00:15:50.642 "trtype": "TCP", 00:15:50.642 "adrfam": "IPv4", 00:15:50.642 "traddr": "10.0.0.3", 00:15:50.642 "trsvcid": "4420" 00:15:50.642 }, 00:15:50.642 "peer_address": { 00:15:50.642 "trtype": "TCP", 00:15:50.642 "adrfam": "IPv4", 00:15:50.642 "traddr": "10.0.0.1", 00:15:50.642 "trsvcid": "48136" 00:15:50.642 }, 00:15:50.642 "auth": { 00:15:50.642 "state": "completed", 00:15:50.642 "digest": "sha384", 00:15:50.642 "dhgroup": "ffdhe4096" 00:15:50.643 } 00:15:50.643 } 00:15:50.643 ]' 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.643 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.210 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:51.210 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.778 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.036 06:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.036 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.037 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.602 00:15:52.602 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.602 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.602 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.860 { 00:15:52.860 "cntlid": 75, 00:15:52.860 "qid": 0, 00:15:52.860 "state": "enabled", 00:15:52.860 "thread": "nvmf_tgt_poll_group_000", 00:15:52.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:52.860 "listen_address": { 00:15:52.860 "trtype": "TCP", 00:15:52.860 "adrfam": "IPv4", 00:15:52.860 "traddr": "10.0.0.3", 00:15:52.860 "trsvcid": "4420" 00:15:52.860 }, 00:15:52.860 "peer_address": { 00:15:52.860 "trtype": "TCP", 00:15:52.860 "adrfam": "IPv4", 00:15:52.860 "traddr": "10.0.0.1", 00:15:52.860 "trsvcid": "48168" 00:15:52.860 }, 00:15:52.860 "auth": { 00:15:52.860 "state": "completed", 00:15:52.860 "digest": "sha384", 00:15:52.860 "dhgroup": "ffdhe4096" 00:15:52.860 } 00:15:52.860 } 00:15:52.860 ]' 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.860 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.119 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:53.119 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.686 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.945 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.512 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.512 { 00:15:54.512 "cntlid": 77, 00:15:54.512 "qid": 0, 00:15:54.512 "state": "enabled", 00:15:54.512 "thread": "nvmf_tgt_poll_group_000", 00:15:54.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:54.512 "listen_address": { 00:15:54.512 "trtype": "TCP", 00:15:54.512 "adrfam": "IPv4", 00:15:54.512 "traddr": "10.0.0.3", 00:15:54.512 "trsvcid": "4420" 00:15:54.512 }, 00:15:54.512 "peer_address": { 00:15:54.512 "trtype": "TCP", 00:15:54.512 "adrfam": "IPv4", 00:15:54.512 "traddr": "10.0.0.1", 00:15:54.512 "trsvcid": "48190" 00:15:54.512 }, 00:15:54.512 "auth": { 00:15:54.512 "state": "completed", 00:15:54.512 "digest": "sha384", 00:15:54.512 "dhgroup": "ffdhe4096" 00:15:54.512 } 00:15:54.512 } 00:15:54.512 ]' 00:15:54.512 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.770 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.029 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:55.029 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.597 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.856 06:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.856 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.424 00:15:56.424 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.424 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.424 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.683 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.683 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.683 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.683 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.683 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.683 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.683 { 00:15:56.683 "cntlid": 79, 00:15:56.683 "qid": 0, 00:15:56.683 "state": "enabled", 00:15:56.683 "thread": "nvmf_tgt_poll_group_000", 00:15:56.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:56.684 "listen_address": { 00:15:56.684 "trtype": "TCP", 00:15:56.684 "adrfam": "IPv4", 00:15:56.684 "traddr": "10.0.0.3", 00:15:56.684 "trsvcid": "4420" 00:15:56.684 }, 00:15:56.684 "peer_address": { 00:15:56.684 "trtype": "TCP", 00:15:56.684 "adrfam": "IPv4", 00:15:56.684 "traddr": "10.0.0.1", 00:15:56.684 "trsvcid": "48218" 00:15:56.684 }, 00:15:56.684 "auth": { 00:15:56.684 "state": "completed", 00:15:56.684 "digest": "sha384", 00:15:56.684 "dhgroup": "ffdhe4096" 00:15:56.684 } 00:15:56.684 } 00:15:56.684 ]' 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.684 06:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.684 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.943 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:56.943 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.880 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.448 00:15:58.448 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.448 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.448 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.707 { 00:15:58.707 "cntlid": 81, 00:15:58.707 "qid": 0, 00:15:58.707 "state": "enabled", 00:15:58.707 "thread": "nvmf_tgt_poll_group_000", 00:15:58.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:15:58.707 "listen_address": { 00:15:58.707 "trtype": "TCP", 00:15:58.707 "adrfam": "IPv4", 00:15:58.707 "traddr": "10.0.0.3", 00:15:58.707 "trsvcid": "4420" 00:15:58.707 }, 00:15:58.707 "peer_address": { 00:15:58.707 "trtype": "TCP", 00:15:58.707 "adrfam": "IPv4", 00:15:58.707 "traddr": "10.0.0.1", 00:15:58.707 "trsvcid": "48238" 00:15:58.707 }, 00:15:58.707 "auth": { 00:15:58.707 "state": "completed", 00:15:58.707 "digest": "sha384", 00:15:58.707 "dhgroup": "ffdhe6144" 00:15:58.707 } 00:15:58.707 } 00:15:58.707 ]' 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
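The cycle repeated throughout this output exercises one DH-HMAC-CHAP digest/dhgroup/key combination at a time. A minimal sketch of a single iteration, reconstructed only from commands visible in this log (sha384/ffdhe6144/key0 as the example), assuming the target RPC listens on the default socket (the rpc_cmd wrapper above hides it) and with $KEY/$CKEY standing in for the DHHC-1:... secrets printed in the surrounding lines:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0
SUBNQN=nqn.2024-03.io.spdk:cnode0
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: authorize the host NQN with DH-HMAC-CHAP key0 (ckey0 enables bidirectional auth)
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: restrict the initiator to the digest/dhgroup under test, then attach a controller
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm the target's qpair negotiated the expected auth parameters, then drop the bdev path
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# repeat the handshake through the kernel initiator, then clean up for the next key/dhgroup
nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN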
00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.707 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.967 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.967 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.967 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.967 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:58.967 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.906 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.165 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.166 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.735 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.735 { 00:16:00.735 "cntlid": 83, 00:16:00.735 "qid": 0, 00:16:00.735 "state": "enabled", 00:16:00.735 "thread": "nvmf_tgt_poll_group_000", 00:16:00.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:00.735 "listen_address": { 00:16:00.735 "trtype": "TCP", 00:16:00.735 "adrfam": "IPv4", 00:16:00.735 "traddr": "10.0.0.3", 00:16:00.735 "trsvcid": "4420" 00:16:00.735 }, 00:16:00.735 "peer_address": { 00:16:00.735 "trtype": "TCP", 00:16:00.735 "adrfam": "IPv4", 00:16:00.735 "traddr": "10.0.0.1", 00:16:00.735 "trsvcid": "46892" 00:16:00.735 }, 00:16:00.735 "auth": { 00:16:00.735 "state": "completed", 00:16:00.735 "digest": "sha384", 
00:16:00.735 "dhgroup": "ffdhe6144" 00:16:00.735 } 00:16:00.735 } 00:16:00.735 ]' 00:16:00.735 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.995 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.254 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:01.254 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:01.822 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.822 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:01.822 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.822 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.081 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.081 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.081 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.081 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.340 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.599 00:16:02.599 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.599 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.599 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.858 { 00:16:02.858 "cntlid": 85, 00:16:02.858 "qid": 0, 00:16:02.858 "state": "enabled", 00:16:02.858 "thread": "nvmf_tgt_poll_group_000", 00:16:02.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:02.858 "listen_address": { 00:16:02.858 "trtype": "TCP", 00:16:02.858 "adrfam": "IPv4", 00:16:02.858 "traddr": "10.0.0.3", 00:16:02.858 "trsvcid": "4420" 00:16:02.858 }, 00:16:02.858 "peer_address": { 00:16:02.858 "trtype": "TCP", 00:16:02.858 "adrfam": "IPv4", 00:16:02.858 "traddr": "10.0.0.1", 00:16:02.858 "trsvcid": "46934" 
00:16:02.858 }, 00:16:02.858 "auth": { 00:16:02.858 "state": "completed", 00:16:02.858 "digest": "sha384", 00:16:02.858 "dhgroup": "ffdhe6144" 00:16:02.858 } 00:16:02.858 } 00:16:02.858 ]' 00:16:02.858 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.117 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.117 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.117 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.117 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.117 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.117 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.117 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.376 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:03.376 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.944 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.203 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.771 00:16:04.771 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.771 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.771 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.030 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.030 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.030 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.030 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.031 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.031 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.031 { 00:16:05.031 "cntlid": 87, 00:16:05.031 "qid": 0, 00:16:05.031 "state": "enabled", 00:16:05.031 "thread": "nvmf_tgt_poll_group_000", 00:16:05.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:05.031 "listen_address": { 00:16:05.031 "trtype": "TCP", 00:16:05.031 "adrfam": "IPv4", 00:16:05.031 "traddr": "10.0.0.3", 00:16:05.031 "trsvcid": "4420" 00:16:05.031 }, 00:16:05.031 "peer_address": { 00:16:05.031 "trtype": "TCP", 00:16:05.031 "adrfam": "IPv4", 00:16:05.031 "traddr": "10.0.0.1", 00:16:05.031 "trsvcid": 
"46968" 00:16:05.031 }, 00:16:05.031 "auth": { 00:16:05.031 "state": "completed", 00:16:05.031 "digest": "sha384", 00:16:05.031 "dhgroup": "ffdhe6144" 00:16:05.031 } 00:16:05.031 } 00:16:05.031 ]' 00:16:05.031 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.031 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.031 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.290 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.290 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.290 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.290 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.290 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.549 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:05.549 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.117 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.376 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.377 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.947 00:16:06.947 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.947 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.947 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.205 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.206 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.206 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.206 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.206 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.206 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.206 { 00:16:07.206 "cntlid": 89, 00:16:07.206 "qid": 0, 00:16:07.206 "state": "enabled", 00:16:07.206 "thread": "nvmf_tgt_poll_group_000", 00:16:07.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:07.206 "listen_address": { 00:16:07.206 "trtype": "TCP", 00:16:07.206 "adrfam": "IPv4", 00:16:07.206 "traddr": "10.0.0.3", 00:16:07.206 "trsvcid": "4420" 00:16:07.206 }, 00:16:07.206 "peer_address": { 00:16:07.206 
"trtype": "TCP", 00:16:07.206 "adrfam": "IPv4", 00:16:07.206 "traddr": "10.0.0.1", 00:16:07.206 "trsvcid": "47002" 00:16:07.206 }, 00:16:07.206 "auth": { 00:16:07.206 "state": "completed", 00:16:07.206 "digest": "sha384", 00:16:07.206 "dhgroup": "ffdhe8192" 00:16:07.206 } 00:16:07.206 } 00:16:07.206 ]' 00:16:07.206 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.464 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.723 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:07.723 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.289 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.856 06:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.856 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.423 00:16:09.423 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.423 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.423 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.681 { 00:16:09.681 "cntlid": 91, 00:16:09.681 "qid": 0, 00:16:09.681 "state": "enabled", 00:16:09.681 "thread": "nvmf_tgt_poll_group_000", 00:16:09.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 
00:16:09.681 "listen_address": { 00:16:09.681 "trtype": "TCP", 00:16:09.681 "adrfam": "IPv4", 00:16:09.681 "traddr": "10.0.0.3", 00:16:09.681 "trsvcid": "4420" 00:16:09.681 }, 00:16:09.681 "peer_address": { 00:16:09.681 "trtype": "TCP", 00:16:09.681 "adrfam": "IPv4", 00:16:09.681 "traddr": "10.0.0.1", 00:16:09.681 "trsvcid": "47024" 00:16:09.681 }, 00:16:09.681 "auth": { 00:16:09.681 "state": "completed", 00:16:09.681 "digest": "sha384", 00:16:09.681 "dhgroup": "ffdhe8192" 00:16:09.681 } 00:16:09.681 } 00:16:09.681 ]' 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.681 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.940 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:09.940 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.507 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.766 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.702 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.702 { 00:16:11.702 "cntlid": 93, 00:16:11.702 "qid": 0, 00:16:11.702 "state": "enabled", 00:16:11.702 "thread": 
"nvmf_tgt_poll_group_000", 00:16:11.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:11.702 "listen_address": { 00:16:11.702 "trtype": "TCP", 00:16:11.702 "adrfam": "IPv4", 00:16:11.702 "traddr": "10.0.0.3", 00:16:11.702 "trsvcid": "4420" 00:16:11.702 }, 00:16:11.702 "peer_address": { 00:16:11.702 "trtype": "TCP", 00:16:11.702 "adrfam": "IPv4", 00:16:11.702 "traddr": "10.0.0.1", 00:16:11.702 "trsvcid": "59620" 00:16:11.702 }, 00:16:11.702 "auth": { 00:16:11.702 "state": "completed", 00:16:11.702 "digest": "sha384", 00:16:11.702 "dhgroup": "ffdhe8192" 00:16:11.702 } 00:16:11.702 } 00:16:11.702 ]' 00:16:11.702 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.961 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.961 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.961 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.962 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.962 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.962 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.962 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.220 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:12.220 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.787 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.787 06:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.047 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.647 00:16:13.647 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.647 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.647 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.937 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.937 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.937 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.937 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.937 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.937 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.937 { 00:16:13.937 "cntlid": 95, 00:16:13.937 "qid": 0, 00:16:13.937 "state": "enabled", 00:16:13.937 
"thread": "nvmf_tgt_poll_group_000", 00:16:13.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:13.937 "listen_address": { 00:16:13.937 "trtype": "TCP", 00:16:13.937 "adrfam": "IPv4", 00:16:13.937 "traddr": "10.0.0.3", 00:16:13.937 "trsvcid": "4420" 00:16:13.937 }, 00:16:13.937 "peer_address": { 00:16:13.937 "trtype": "TCP", 00:16:13.937 "adrfam": "IPv4", 00:16:13.937 "traddr": "10.0.0.1", 00:16:13.937 "trsvcid": "59646" 00:16:13.937 }, 00:16:13.937 "auth": { 00:16:13.937 "state": "completed", 00:16:13.937 "digest": "sha384", 00:16:13.937 "dhgroup": "ffdhe8192" 00:16:13.937 } 00:16:13.937 } 00:16:13.937 ]' 00:16:13.938 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.197 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.455 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:14.455 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.021 06:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.021 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.280 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.540 00:16:15.540 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.540 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.540 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.107 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.107 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.107 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.107 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.107 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.107 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.107 { 00:16:16.107 "cntlid": 97, 00:16:16.107 "qid": 0, 00:16:16.107 "state": "enabled", 00:16:16.107 "thread": "nvmf_tgt_poll_group_000", 00:16:16.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:16.107 "listen_address": { 00:16:16.107 "trtype": "TCP", 00:16:16.107 "adrfam": "IPv4", 00:16:16.107 "traddr": "10.0.0.3", 00:16:16.107 "trsvcid": "4420" 00:16:16.107 }, 00:16:16.107 "peer_address": { 00:16:16.107 "trtype": "TCP", 00:16:16.107 "adrfam": "IPv4", 00:16:16.107 "traddr": "10.0.0.1", 00:16:16.107 "trsvcid": "59680" 00:16:16.107 }, 00:16:16.107 "auth": { 00:16:16.107 "state": "completed", 00:16:16.107 "digest": "sha512", 00:16:16.107 "dhgroup": "null" 00:16:16.108 } 00:16:16.108 } 00:16:16.108 ]' 00:16:16.108 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.108 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.108 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.108 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.108 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.108 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.108 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.108 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.366 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:16.366 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:16.933 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.934 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.192 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.193 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.193 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.451 00:16:17.451 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.451 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.451 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.709 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.709 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.709 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.709 06:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.709 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.709 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.709 { 00:16:17.709 "cntlid": 99, 00:16:17.709 "qid": 0, 00:16:17.709 "state": "enabled", 00:16:17.709 "thread": "nvmf_tgt_poll_group_000", 00:16:17.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:17.709 "listen_address": { 00:16:17.709 "trtype": "TCP", 00:16:17.709 "adrfam": "IPv4", 00:16:17.709 "traddr": "10.0.0.3", 00:16:17.709 "trsvcid": "4420" 00:16:17.709 }, 00:16:17.709 "peer_address": { 00:16:17.709 "trtype": "TCP", 00:16:17.709 "adrfam": "IPv4", 00:16:17.709 "traddr": "10.0.0.1", 00:16:17.709 "trsvcid": "59690" 00:16:17.709 }, 00:16:17.709 "auth": { 00:16:17.709 "state": "completed", 00:16:17.709 "digest": "sha512", 00:16:17.709 "dhgroup": "null" 00:16:17.709 } 00:16:17.709 } 00:16:17.709 ]' 00:16:17.709 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.968 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.226 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:18.226 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.792 06:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.792 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.050 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.619 00:16:19.619 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.619 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.619 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.878 { 00:16:19.878 "cntlid": 101, 00:16:19.878 "qid": 0, 00:16:19.878 "state": "enabled", 00:16:19.878 "thread": "nvmf_tgt_poll_group_000", 00:16:19.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:19.878 "listen_address": { 00:16:19.878 "trtype": "TCP", 00:16:19.878 "adrfam": "IPv4", 00:16:19.878 "traddr": "10.0.0.3", 00:16:19.878 "trsvcid": "4420" 00:16:19.878 }, 00:16:19.878 "peer_address": { 00:16:19.878 "trtype": "TCP", 00:16:19.878 "adrfam": "IPv4", 00:16:19.878 "traddr": "10.0.0.1", 00:16:19.878 "trsvcid": "59708" 00:16:19.878 }, 00:16:19.878 "auth": { 00:16:19.878 "state": "completed", 00:16:19.878 "digest": "sha512", 00:16:19.878 "dhgroup": "null" 00:16:19.878 } 00:16:19.878 } 00:16:19.878 ]' 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.878 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.136 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:20.136 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.071 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.071 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.328 00:16:21.586 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.586 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.586 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.844 { 00:16:21.844 "cntlid": 103, 00:16:21.844 "qid": 0, 00:16:21.844 "state": "enabled", 00:16:21.844 "thread": "nvmf_tgt_poll_group_000", 00:16:21.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:21.844 "listen_address": { 00:16:21.844 "trtype": "TCP", 00:16:21.844 "adrfam": "IPv4", 00:16:21.844 "traddr": "10.0.0.3", 00:16:21.844 "trsvcid": "4420" 00:16:21.844 }, 00:16:21.844 "peer_address": { 00:16:21.844 "trtype": "TCP", 00:16:21.844 "adrfam": "IPv4", 00:16:21.844 "traddr": "10.0.0.1", 00:16:21.844 "trsvcid": "55278" 00:16:21.844 }, 00:16:21.844 "auth": { 00:16:21.844 "state": "completed", 00:16:21.844 "digest": "sha512", 00:16:21.844 "dhgroup": "null" 00:16:21.844 } 00:16:21.844 } 00:16:21.844 ]' 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.844 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.845 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.845 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.845 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.103 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:22.103 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.038 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.038 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.604 00:16:23.604 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.604 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.604 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.863 
06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.863 { 00:16:23.863 "cntlid": 105, 00:16:23.863 "qid": 0, 00:16:23.863 "state": "enabled", 00:16:23.863 "thread": "nvmf_tgt_poll_group_000", 00:16:23.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:23.863 "listen_address": { 00:16:23.863 "trtype": "TCP", 00:16:23.863 "adrfam": "IPv4", 00:16:23.863 "traddr": "10.0.0.3", 00:16:23.863 "trsvcid": "4420" 00:16:23.863 }, 00:16:23.863 "peer_address": { 00:16:23.863 "trtype": "TCP", 00:16:23.863 "adrfam": "IPv4", 00:16:23.863 "traddr": "10.0.0.1", 00:16:23.863 "trsvcid": "55308" 00:16:23.863 }, 00:16:23.863 "auth": { 00:16:23.863 "state": "completed", 00:16:23.863 "digest": "sha512", 00:16:23.863 "dhgroup": "ffdhe2048" 00:16:23.863 } 00:16:23.863 } 00:16:23.863 ]' 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.863 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.121 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:24.121 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:25.055 06:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.055 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.055 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.313 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.313 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.313 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.313 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.572 00:16:25.572 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.572 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.572 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.831 { 00:16:25.831 "cntlid": 107, 00:16:25.831 "qid": 0, 00:16:25.831 "state": "enabled", 00:16:25.831 "thread": "nvmf_tgt_poll_group_000", 00:16:25.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:25.831 "listen_address": { 00:16:25.831 "trtype": "TCP", 00:16:25.831 "adrfam": "IPv4", 00:16:25.831 "traddr": "10.0.0.3", 00:16:25.831 "trsvcid": "4420" 00:16:25.831 }, 00:16:25.831 "peer_address": { 00:16:25.831 "trtype": "TCP", 00:16:25.831 "adrfam": "IPv4", 00:16:25.831 "traddr": "10.0.0.1", 00:16:25.831 "trsvcid": "55338" 00:16:25.831 }, 00:16:25.831 "auth": { 00:16:25.831 "state": "completed", 00:16:25.831 "digest": "sha512", 00:16:25.831 "dhgroup": "ffdhe2048" 00:16:25.831 } 00:16:25.831 } 00:16:25.831 ]' 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.831 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.832 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.832 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.832 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.090 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.090 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.090 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.350 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:26.350 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.915 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.481 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.740 00:16:27.740 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.740 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.740 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.999 { 00:16:27.999 "cntlid": 109, 00:16:27.999 "qid": 0, 00:16:27.999 "state": "enabled", 00:16:27.999 "thread": "nvmf_tgt_poll_group_000", 00:16:27.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:27.999 "listen_address": { 00:16:27.999 "trtype": "TCP", 00:16:27.999 "adrfam": "IPv4", 00:16:27.999 "traddr": "10.0.0.3", 00:16:27.999 "trsvcid": "4420" 00:16:27.999 }, 00:16:27.999 "peer_address": { 00:16:27.999 "trtype": "TCP", 00:16:27.999 "adrfam": "IPv4", 00:16:27.999 "traddr": "10.0.0.1", 00:16:27.999 "trsvcid": "55358" 00:16:27.999 }, 00:16:27.999 "auth": { 00:16:27.999 "state": "completed", 00:16:27.999 "digest": "sha512", 00:16:27.999 "dhgroup": "ffdhe2048" 00:16:27.999 } 00:16:27.999 } 00:16:27.999 ]' 00:16:27.999 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.999 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.999 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.999 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.999 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.258 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.258 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.258 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.516 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:28.516 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:29.083 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.084 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.342 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.909 00:16:29.909 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.909 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.909 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.167 { 00:16:30.167 "cntlid": 111, 00:16:30.167 "qid": 0, 00:16:30.167 "state": "enabled", 00:16:30.167 "thread": "nvmf_tgt_poll_group_000", 00:16:30.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:30.167 "listen_address": { 00:16:30.167 "trtype": "TCP", 00:16:30.167 "adrfam": "IPv4", 00:16:30.167 "traddr": "10.0.0.3", 00:16:30.167 "trsvcid": "4420" 00:16:30.167 }, 00:16:30.167 "peer_address": { 00:16:30.167 "trtype": "TCP", 00:16:30.167 "adrfam": "IPv4", 00:16:30.167 "traddr": "10.0.0.1", 00:16:30.167 "trsvcid": "55400" 00:16:30.167 }, 00:16:30.167 "auth": { 00:16:30.167 "state": "completed", 00:16:30.167 "digest": "sha512", 00:16:30.167 "dhgroup": "ffdhe2048" 00:16:30.167 } 00:16:30.167 } 00:16:30.167 ]' 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.167 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.426 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.426 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.426 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.426 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:30.426 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:31.361 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.361 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:31.361 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.361 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.361 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.362 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.362 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.362 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.362 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.620 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.879 00:16:31.879 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.879 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:31.879 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.138 { 00:16:32.138 "cntlid": 113, 00:16:32.138 "qid": 0, 00:16:32.138 "state": "enabled", 00:16:32.138 "thread": "nvmf_tgt_poll_group_000", 00:16:32.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:32.138 "listen_address": { 00:16:32.138 "trtype": "TCP", 00:16:32.138 "adrfam": "IPv4", 00:16:32.138 "traddr": "10.0.0.3", 00:16:32.138 "trsvcid": "4420" 00:16:32.138 }, 00:16:32.138 "peer_address": { 00:16:32.138 "trtype": "TCP", 00:16:32.138 "adrfam": "IPv4", 00:16:32.138 "traddr": "10.0.0.1", 00:16:32.138 "trsvcid": "39496" 00:16:32.138 }, 00:16:32.138 "auth": { 00:16:32.138 "state": "completed", 00:16:32.138 "digest": "sha512", 00:16:32.138 "dhgroup": "ffdhe3072" 00:16:32.138 } 00:16:32.138 } 00:16:32.138 ]' 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.138 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.396 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.396 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.396 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.654 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:32.654 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret 
DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.221 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.480 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.739 00:16:33.739 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.739 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.739 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.306 { 00:16:34.306 "cntlid": 115, 00:16:34.306 "qid": 0, 00:16:34.306 "state": "enabled", 00:16:34.306 "thread": "nvmf_tgt_poll_group_000", 00:16:34.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:34.306 "listen_address": { 00:16:34.306 "trtype": "TCP", 00:16:34.306 "adrfam": "IPv4", 00:16:34.306 "traddr": "10.0.0.3", 00:16:34.306 "trsvcid": "4420" 00:16:34.306 }, 00:16:34.306 "peer_address": { 00:16:34.306 "trtype": "TCP", 00:16:34.306 "adrfam": "IPv4", 00:16:34.306 "traddr": "10.0.0.1", 00:16:34.306 "trsvcid": "39524" 00:16:34.306 }, 00:16:34.306 "auth": { 00:16:34.306 "state": "completed", 00:16:34.306 "digest": "sha512", 00:16:34.306 "dhgroup": "ffdhe3072" 00:16:34.306 } 00:16:34.306 } 00:16:34.306 ]' 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.306 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.564 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:34.564 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 
34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:35.131 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.389 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:35.389 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.389 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.389 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.389 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.389 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.390 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.649 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.909 00:16:35.909 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.909 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.909 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.167 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.167 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.167 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.167 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.167 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.167 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.167 { 00:16:36.167 "cntlid": 117, 00:16:36.167 "qid": 0, 00:16:36.167 "state": "enabled", 00:16:36.167 "thread": "nvmf_tgt_poll_group_000", 00:16:36.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:36.167 "listen_address": { 00:16:36.167 "trtype": "TCP", 00:16:36.167 "adrfam": "IPv4", 00:16:36.167 "traddr": "10.0.0.3", 00:16:36.167 "trsvcid": "4420" 00:16:36.167 }, 00:16:36.167 "peer_address": { 00:16:36.167 "trtype": "TCP", 00:16:36.167 "adrfam": "IPv4", 00:16:36.167 "traddr": "10.0.0.1", 00:16:36.167 "trsvcid": "39552" 00:16:36.167 }, 00:16:36.167 "auth": { 00:16:36.167 "state": "completed", 00:16:36.167 "digest": "sha512", 00:16:36.167 "dhgroup": "ffdhe3072" 00:16:36.168 } 00:16:36.168 } 00:16:36.168 ]' 00:16:36.168 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.168 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.168 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.427 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.427 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.427 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.427 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.427 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.686 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:36.686 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.253 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.512 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.080 00:16:38.080 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.080 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.080 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.339 { 00:16:38.339 "cntlid": 119, 00:16:38.339 "qid": 0, 00:16:38.339 "state": "enabled", 00:16:38.339 "thread": "nvmf_tgt_poll_group_000", 00:16:38.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:38.339 "listen_address": { 00:16:38.339 "trtype": "TCP", 00:16:38.339 "adrfam": "IPv4", 00:16:38.339 "traddr": "10.0.0.3", 00:16:38.339 "trsvcid": "4420" 00:16:38.339 }, 00:16:38.339 "peer_address": { 00:16:38.339 "trtype": "TCP", 00:16:38.339 "adrfam": "IPv4", 00:16:38.339 "traddr": "10.0.0.1", 00:16:38.339 "trsvcid": "39582" 00:16:38.339 }, 00:16:38.339 "auth": { 00:16:38.339 "state": "completed", 00:16:38.339 "digest": "sha512", 00:16:38.339 "dhgroup": "ffdhe3072" 00:16:38.339 } 00:16:38.339 } 00:16:38.339 ]' 00:16:38.339 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.340 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.598 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:38.598 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.534 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.102 00:16:40.102 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.102 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.102 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.361 { 00:16:40.361 "cntlid": 121, 00:16:40.361 "qid": 0, 00:16:40.361 "state": "enabled", 00:16:40.361 "thread": "nvmf_tgt_poll_group_000", 00:16:40.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:40.361 "listen_address": { 00:16:40.361 "trtype": "TCP", 00:16:40.361 "adrfam": "IPv4", 00:16:40.361 "traddr": "10.0.0.3", 00:16:40.361 "trsvcid": "4420" 00:16:40.361 }, 00:16:40.361 "peer_address": { 00:16:40.361 "trtype": "TCP", 00:16:40.361 "adrfam": "IPv4", 00:16:40.361 "traddr": "10.0.0.1", 00:16:40.361 "trsvcid": "39622" 00:16:40.361 }, 00:16:40.361 "auth": { 00:16:40.361 "state": "completed", 00:16:40.361 "digest": "sha512", 00:16:40.361 "dhgroup": "ffdhe4096" 00:16:40.361 } 00:16:40.361 } 00:16:40.361 ]' 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.361 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.620 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret 
DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:40.620 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.187 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.188 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.755 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.014 00:16:42.014 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.014 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.014 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.273 { 00:16:42.273 "cntlid": 123, 00:16:42.273 "qid": 0, 00:16:42.273 "state": "enabled", 00:16:42.273 "thread": "nvmf_tgt_poll_group_000", 00:16:42.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:42.273 "listen_address": { 00:16:42.273 "trtype": "TCP", 00:16:42.273 "adrfam": "IPv4", 00:16:42.273 "traddr": "10.0.0.3", 00:16:42.273 "trsvcid": "4420" 00:16:42.273 }, 00:16:42.273 "peer_address": { 00:16:42.273 "trtype": "TCP", 00:16:42.273 "adrfam": "IPv4", 00:16:42.273 "traddr": "10.0.0.1", 00:16:42.273 "trsvcid": "49936" 00:16:42.273 }, 00:16:42.273 "auth": { 00:16:42.273 "state": "completed", 00:16:42.273 "digest": "sha512", 00:16:42.273 "dhgroup": "ffdhe4096" 00:16:42.273 } 00:16:42.273 } 00:16:42.273 ]' 00:16:42.273 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.533 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.791 06:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:42.791 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.359 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.618 06:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.618 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.186 00:16:44.186 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.186 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.186 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.445 { 00:16:44.445 "cntlid": 125, 00:16:44.445 "qid": 0, 00:16:44.445 "state": "enabled", 00:16:44.445 "thread": "nvmf_tgt_poll_group_000", 00:16:44.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:44.445 "listen_address": { 00:16:44.445 "trtype": "TCP", 00:16:44.445 "adrfam": "IPv4", 00:16:44.445 "traddr": "10.0.0.3", 00:16:44.445 "trsvcid": "4420" 00:16:44.445 }, 00:16:44.445 "peer_address": { 00:16:44.445 "trtype": "TCP", 00:16:44.445 "adrfam": "IPv4", 00:16:44.445 "traddr": "10.0.0.1", 00:16:44.445 "trsvcid": "49956" 00:16:44.445 }, 00:16:44.445 "auth": { 00:16:44.445 "state": "completed", 00:16:44.445 "digest": "sha512", 00:16:44.445 "dhgroup": "ffdhe4096" 00:16:44.445 } 00:16:44.445 } 00:16:44.445 ]' 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.445 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.703 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:44.703 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.665 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.924 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.924 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:16:45.924 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.925 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.184 00:16:46.184 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.184 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.184 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.442 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.442 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.442 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.442 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.443 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.443 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.443 { 00:16:46.443 "cntlid": 127, 00:16:46.443 "qid": 0, 00:16:46.443 "state": "enabled", 00:16:46.443 "thread": "nvmf_tgt_poll_group_000", 00:16:46.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:46.443 "listen_address": { 00:16:46.443 "trtype": "TCP", 00:16:46.443 "adrfam": "IPv4", 00:16:46.443 "traddr": "10.0.0.3", 00:16:46.443 "trsvcid": "4420" 00:16:46.443 }, 00:16:46.443 "peer_address": { 00:16:46.443 "trtype": "TCP", 00:16:46.443 "adrfam": "IPv4", 00:16:46.443 "traddr": "10.0.0.1", 00:16:46.443 "trsvcid": "49974" 00:16:46.443 }, 00:16:46.443 "auth": { 00:16:46.443 "state": "completed", 00:16:46.443 "digest": "sha512", 00:16:46.443 "dhgroup": "ffdhe4096" 00:16:46.443 } 00:16:46.443 } 00:16:46.443 ]' 00:16:46.443 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.443 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.443 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.701 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.701 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.701 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.701 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.701 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.959 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:46.959 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.525 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.784 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.784 06:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.785 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.785 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.352 00:16:48.352 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.352 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.352 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.611 { 00:16:48.611 "cntlid": 129, 00:16:48.611 "qid": 0, 00:16:48.611 "state": "enabled", 00:16:48.611 "thread": "nvmf_tgt_poll_group_000", 00:16:48.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:48.611 "listen_address": { 00:16:48.611 "trtype": "TCP", 00:16:48.611 "adrfam": "IPv4", 00:16:48.611 "traddr": "10.0.0.3", 00:16:48.611 "trsvcid": "4420" 00:16:48.611 }, 00:16:48.611 "peer_address": { 00:16:48.611 "trtype": "TCP", 00:16:48.611 "adrfam": "IPv4", 00:16:48.611 "traddr": "10.0.0.1", 00:16:48.611 "trsvcid": "49992" 00:16:48.611 }, 00:16:48.611 "auth": { 00:16:48.611 "state": "completed", 00:16:48.611 "digest": "sha512", 00:16:48.611 "dhgroup": "ffdhe6144" 00:16:48.611 } 00:16:48.611 } 00:16:48.611 ]' 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.611 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.178 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:49.178 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.745 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.003 06:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.003 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.569 00:16:50.569 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.569 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.569 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.827 { 00:16:50.827 "cntlid": 131, 00:16:50.827 "qid": 0, 00:16:50.827 "state": "enabled", 00:16:50.827 "thread": "nvmf_tgt_poll_group_000", 00:16:50.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:50.827 "listen_address": { 00:16:50.827 "trtype": "TCP", 00:16:50.827 "adrfam": "IPv4", 00:16:50.827 "traddr": "10.0.0.3", 00:16:50.827 "trsvcid": "4420" 00:16:50.827 }, 00:16:50.827 "peer_address": { 00:16:50.827 "trtype": "TCP", 00:16:50.827 "adrfam": "IPv4", 00:16:50.827 "traddr": "10.0.0.1", 00:16:50.827 "trsvcid": "45484" 00:16:50.827 }, 00:16:50.827 "auth": { 00:16:50.827 "state": "completed", 00:16:50.827 "digest": "sha512", 00:16:50.827 "dhgroup": "ffdhe6144" 00:16:50.827 } 00:16:50.827 } 00:16:50.827 ]' 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.827 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:16:51.086 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.086 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.086 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.343 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:51.343 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.908 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.166 06:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.166 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.167 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.167 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.734 00:16:52.734 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.734 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.734 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.993 { 00:16:52.993 "cntlid": 133, 00:16:52.993 "qid": 0, 00:16:52.993 "state": "enabled", 00:16:52.993 "thread": "nvmf_tgt_poll_group_000", 00:16:52.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:52.993 "listen_address": { 00:16:52.993 "trtype": "TCP", 00:16:52.993 "adrfam": "IPv4", 00:16:52.993 "traddr": "10.0.0.3", 00:16:52.993 "trsvcid": "4420" 00:16:52.993 }, 00:16:52.993 "peer_address": { 00:16:52.993 "trtype": "TCP", 00:16:52.993 "adrfam": "IPv4", 00:16:52.993 "traddr": "10.0.0.1", 00:16:52.993 "trsvcid": "45494" 00:16:52.993 }, 00:16:52.993 "auth": { 00:16:52.993 "state": "completed", 00:16:52.993 "digest": "sha512", 00:16:52.993 "dhgroup": "ffdhe6144" 00:16:52.993 } 00:16:52.993 } 00:16:52.993 ]' 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.993 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.253 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:16:53.253 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.253 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.253 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.253 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.511 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:53.511 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:16:54.079 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.079 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:54.337 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.337 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.337 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.337 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.337 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.337 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.595 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.191 00:16:55.191 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.191 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.191 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.449 { 00:16:55.449 "cntlid": 135, 00:16:55.449 "qid": 0, 00:16:55.449 "state": "enabled", 00:16:55.449 "thread": "nvmf_tgt_poll_group_000", 00:16:55.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:55.449 "listen_address": { 00:16:55.449 "trtype": "TCP", 00:16:55.449 "adrfam": "IPv4", 00:16:55.449 "traddr": "10.0.0.3", 00:16:55.449 "trsvcid": "4420" 00:16:55.449 }, 00:16:55.449 "peer_address": { 00:16:55.449 "trtype": "TCP", 00:16:55.449 "adrfam": "IPv4", 00:16:55.449 "traddr": "10.0.0.1", 00:16:55.449 "trsvcid": "45526" 00:16:55.449 }, 00:16:55.449 "auth": { 00:16:55.449 "state": "completed", 00:16:55.449 "digest": "sha512", 00:16:55.449 "dhgroup": "ffdhe6144" 00:16:55.449 } 00:16:55.449 } 00:16:55.449 ]' 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.449 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.709 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.709 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.709 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.968 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:55.968 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.537 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.104 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.105 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.105 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.105 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.672 00:16:57.672 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.672 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.672 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.930 { 00:16:57.930 "cntlid": 137, 00:16:57.930 "qid": 0, 00:16:57.930 "state": "enabled", 00:16:57.930 "thread": "nvmf_tgt_poll_group_000", 00:16:57.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:16:57.930 "listen_address": { 00:16:57.930 "trtype": "TCP", 00:16:57.930 "adrfam": "IPv4", 00:16:57.930 "traddr": "10.0.0.3", 00:16:57.930 "trsvcid": "4420" 00:16:57.930 }, 00:16:57.930 "peer_address": { 00:16:57.930 "trtype": "TCP", 00:16:57.930 "adrfam": "IPv4", 00:16:57.930 "traddr": "10.0.0.1", 00:16:57.930 "trsvcid": "45562" 00:16:57.930 }, 00:16:57.930 "auth": { 00:16:57.930 "state": "completed", 00:16:57.930 "digest": "sha512", 00:16:57.930 "dhgroup": "ffdhe8192" 00:16:57.930 } 00:16:57.930 } 00:16:57.930 ]' 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.930 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.930 06:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.930 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.930 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.189 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.189 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.189 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.447 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:58.447 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.383 06:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.383 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.319 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.319 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.319 { 00:17:00.319 "cntlid": 139, 00:17:00.319 "qid": 0, 00:17:00.319 "state": "enabled", 00:17:00.319 "thread": "nvmf_tgt_poll_group_000", 00:17:00.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:00.319 "listen_address": { 00:17:00.319 "trtype": "TCP", 00:17:00.319 "adrfam": "IPv4", 00:17:00.319 "traddr": "10.0.0.3", 00:17:00.319 "trsvcid": "4420" 00:17:00.319 }, 00:17:00.319 "peer_address": { 00:17:00.319 "trtype": "TCP", 00:17:00.319 "adrfam": "IPv4", 00:17:00.319 "traddr": "10.0.0.1", 00:17:00.319 "trsvcid": "45592" 00:17:00.319 }, 00:17:00.319 "auth": { 00:17:00.319 "state": "completed", 00:17:00.319 "digest": "sha512", 00:17:00.319 "dhgroup": "ffdhe8192" 00:17:00.319 } 00:17:00.319 } 00:17:00.319 ]' 00:17:00.319 06:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.578 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.837 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:17:00.837 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: --dhchap-ctrl-secret DHHC-1:02:ODUxNjUzOGM5OWRlZWVmYjcxMGVkODU1ZWRjMjQ1MTA3NzM0ZjE1Mzg5ODk2ZmRjPkjw5Q==: 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.405 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.664 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.231 00:17:02.231 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.232 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.232 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.491 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.491 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.491 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.491 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.491 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.749 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.749 { 00:17:02.749 "cntlid": 141, 00:17:02.749 "qid": 0, 00:17:02.749 "state": "enabled", 00:17:02.749 "thread": "nvmf_tgt_poll_group_000", 00:17:02.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:02.749 "listen_address": { 00:17:02.749 "trtype": "TCP", 00:17:02.749 "adrfam": "IPv4", 00:17:02.749 "traddr": "10.0.0.3", 00:17:02.750 "trsvcid": "4420" 00:17:02.750 }, 00:17:02.750 "peer_address": { 00:17:02.750 "trtype": "TCP", 00:17:02.750 "adrfam": "IPv4", 00:17:02.750 "traddr": "10.0.0.1", 00:17:02.750 "trsvcid": "46084" 00:17:02.750 }, 00:17:02.750 "auth": { 00:17:02.750 "state": "completed", 00:17:02.750 "digest": 
"sha512", 00:17:02.750 "dhgroup": "ffdhe8192" 00:17:02.750 } 00:17:02.750 } 00:17:02.750 ]' 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.750 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.009 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:17:03.009 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:01:MzgxMGJiNzJlYzZjMTEyMjFlMjU0NWViYmFkYWYxZmPJUshH: 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.576 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.835 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.401 00:17:04.401 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.401 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.401 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.968 { 00:17:04.968 "cntlid": 143, 00:17:04.968 "qid": 0, 00:17:04.968 "state": "enabled", 00:17:04.968 "thread": "nvmf_tgt_poll_group_000", 00:17:04.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:04.968 "listen_address": { 00:17:04.968 "trtype": "TCP", 00:17:04.968 "adrfam": "IPv4", 00:17:04.968 "traddr": "10.0.0.3", 00:17:04.968 "trsvcid": "4420" 00:17:04.968 }, 00:17:04.968 "peer_address": { 00:17:04.968 "trtype": "TCP", 00:17:04.968 "adrfam": "IPv4", 00:17:04.968 "traddr": "10.0.0.1", 00:17:04.968 "trsvcid": "46118" 00:17:04.968 }, 00:17:04.968 "auth": { 00:17:04.968 "state": "completed", 00:17:04.968 
"digest": "sha512", 00:17:04.968 "dhgroup": "ffdhe8192" 00:17:04.968 } 00:17:04.968 } 00:17:04.968 ]' 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.968 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.227 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:17:05.227 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.795 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.361 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.928 00:17:06.928 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.928 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.928 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.186 { 00:17:07.186 "cntlid": 145, 00:17:07.186 "qid": 0, 00:17:07.186 "state": "enabled", 00:17:07.186 "thread": "nvmf_tgt_poll_group_000", 00:17:07.186 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:07.186 "listen_address": { 00:17:07.186 "trtype": "TCP", 00:17:07.186 "adrfam": "IPv4", 00:17:07.186 "traddr": "10.0.0.3", 00:17:07.186 "trsvcid": "4420" 00:17:07.186 }, 00:17:07.186 "peer_address": { 00:17:07.186 "trtype": "TCP", 00:17:07.186 "adrfam": "IPv4", 00:17:07.186 "traddr": "10.0.0.1", 00:17:07.186 "trsvcid": "46144" 00:17:07.186 }, 00:17:07.186 "auth": { 00:17:07.186 "state": "completed", 00:17:07.186 "digest": "sha512", 00:17:07.186 "dhgroup": "ffdhe8192" 00:17:07.186 } 00:17:07.186 } 00:17:07.186 ]' 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.186 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.753 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:17:07.753 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:00:Y2FhYWQ3MDllNzgxY2M0MTc2NmI2Y2I5NDA5YmY5NTk2Y2VmOTU2Mzk0ZmRkYmVi5bMzRA==: --dhchap-ctrl-secret DHHC-1:03:ZTkyMTZjNTUwNjdiOTczMGU0ZjRhNzFhMjZiZjcxOWIwZTJiZjQwOWNhMzNiNjA5MjYzMDE3OGY0YmExM2JhZCWGCkI=: 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 00:17:08.321 06:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:08.321 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:08.917 request: 00:17:08.917 { 00:17:08.917 "name": "nvme0", 00:17:08.917 "trtype": "tcp", 00:17:08.917 "traddr": "10.0.0.3", 00:17:08.917 "adrfam": "ipv4", 00:17:08.917 "trsvcid": "4420", 00:17:08.917 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:08.917 "prchk_reftag": false, 00:17:08.917 "prchk_guard": false, 00:17:08.917 "hdgst": false, 00:17:08.917 "ddgst": false, 00:17:08.917 "dhchap_key": "key2", 00:17:08.917 "allow_unrecognized_csi": false, 00:17:08.917 "method": "bdev_nvme_attach_controller", 00:17:08.917 "req_id": 1 00:17:08.917 } 00:17:08.917 Got JSON-RPC error response 00:17:08.917 response: 00:17:08.917 { 00:17:08.917 "code": -5, 00:17:08.917 "message": "Input/output error" 00:17:08.917 } 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:08.917 
06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.917 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.918 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.489 request: 00:17:09.490 { 00:17:09.490 "name": "nvme0", 00:17:09.490 "trtype": "tcp", 00:17:09.490 "traddr": "10.0.0.3", 00:17:09.490 "adrfam": "ipv4", 00:17:09.490 "trsvcid": "4420", 00:17:09.490 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:09.490 "prchk_reftag": false, 00:17:09.490 "prchk_guard": false, 00:17:09.490 "hdgst": false, 00:17:09.490 "ddgst": false, 00:17:09.490 "dhchap_key": "key1", 00:17:09.490 "dhchap_ctrlr_key": "ckey2", 00:17:09.490 "allow_unrecognized_csi": false, 00:17:09.490 "method": "bdev_nvme_attach_controller", 00:17:09.490 "req_id": 1 00:17:09.490 } 00:17:09.490 Got JSON-RPC error response 00:17:09.490 response: 00:17:09.490 { 
00:17:09.490 "code": -5, 00:17:09.490 "message": "Input/output error" 00:17:09.490 } 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.490 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.058 
request: 00:17:10.058 { 00:17:10.058 "name": "nvme0", 00:17:10.058 "trtype": "tcp", 00:17:10.058 "traddr": "10.0.0.3", 00:17:10.058 "adrfam": "ipv4", 00:17:10.058 "trsvcid": "4420", 00:17:10.058 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:10.058 "prchk_reftag": false, 00:17:10.058 "prchk_guard": false, 00:17:10.059 "hdgst": false, 00:17:10.059 "ddgst": false, 00:17:10.059 "dhchap_key": "key1", 00:17:10.059 "dhchap_ctrlr_key": "ckey1", 00:17:10.059 "allow_unrecognized_csi": false, 00:17:10.059 "method": "bdev_nvme_attach_controller", 00:17:10.059 "req_id": 1 00:17:10.059 } 00:17:10.059 Got JSON-RPC error response 00:17:10.059 response: 00:17:10.059 { 00:17:10.059 "code": -5, 00:17:10.059 "message": "Input/output error" 00:17:10.059 } 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67585 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67585 ']' 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67585 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67585 00:17:10.059 killing process with pid 67585 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67585' 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67585 00:17:10.059 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67585 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.318 06:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70646 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70646 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70646 ']' 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.318 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70646 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70646 ']' 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
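The target restart traced here reduces to only a few commands. As a hedged, condensed sketch (the binary, netns, socket, and key-file paths are the ones from this particular run; the socket-polling loop merely stands in for the harness's waitforlisten helper, and the framework start-up RPCs issued in between are omitted):

```bash
#!/usr/bin/env bash
# Hedged sketch only: condenses the nvmf_tgt restart traced above, reusing the exact
# paths and flags from this run. Not the test script itself.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"   # talks to the target's default socket, /var/tmp/spdk.sock

# Relaunch nvmf_tgt inside the test netns with DH-CHAP auth debug logging enabled,
# pausing initialization until told to proceed (--wait-for-rpc).
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
echo "nvmf_tgt restarted as pid $nvmfpid"

# Wait for the RPC UNIX socket before issuing any commands (stand-in for waitforlisten).
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done

# Load the generated DH-CHAP key files into the keyring, mirroring the
# keyring_file_add_key calls that follow in the trace (key0/ckey0, key1/ckey1, ...).
"$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.FEo
"$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Idj
"$RPC" keyring_file_add_key key1  /tmp/spdk.key-sha256.kWO
"$RPC" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qg5
```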
00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.577 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.836 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.836 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:10.836 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:10.836 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.836 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.836 null0 00:17:11.095 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.095 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.095 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FEo 00:17:11.095 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.095 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.095 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Idj ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Idj 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kWO 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.qg5 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qg5 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.096 06:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.blT 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.L2h ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L2h 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aaF 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:11.096 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.033 nvme0n1 00:17:12.033 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.033 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.033 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.292 { 00:17:12.292 "cntlid": 1, 00:17:12.292 "qid": 0, 00:17:12.292 "state": "enabled", 00:17:12.292 "thread": "nvmf_tgt_poll_group_000", 00:17:12.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:12.292 "listen_address": { 00:17:12.292 "trtype": "TCP", 00:17:12.292 "adrfam": "IPv4", 00:17:12.292 "traddr": "10.0.0.3", 00:17:12.292 "trsvcid": "4420" 00:17:12.292 }, 00:17:12.292 "peer_address": { 00:17:12.292 "trtype": "TCP", 00:17:12.292 "adrfam": "IPv4", 00:17:12.292 "traddr": "10.0.0.1", 00:17:12.292 "trsvcid": "60560" 00:17:12.292 }, 00:17:12.292 "auth": { 00:17:12.292 "state": "completed", 00:17:12.292 "digest": "sha512", 00:17:12.292 "dhgroup": "ffdhe8192" 00:17:12.292 } 00:17:12.292 } 00:17:12.292 ]' 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.292 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.551 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:17:12.551 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key3 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:13.489 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:13.747 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.748 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.005 request: 00:17:14.005 { 00:17:14.005 "name": "nvme0", 00:17:14.005 "trtype": "tcp", 00:17:14.005 "traddr": "10.0.0.3", 00:17:14.005 "adrfam": "ipv4", 00:17:14.005 "trsvcid": "4420", 00:17:14.005 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:14.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:14.005 "prchk_reftag": false, 00:17:14.005 "prchk_guard": false, 00:17:14.005 "hdgst": false, 00:17:14.005 "ddgst": false, 00:17:14.005 "dhchap_key": "key3", 00:17:14.005 "allow_unrecognized_csi": false, 00:17:14.005 "method": "bdev_nvme_attach_controller", 00:17:14.005 "req_id": 1 00:17:14.005 } 00:17:14.005 Got JSON-RPC error response 00:17:14.005 response: 00:17:14.005 { 00:17:14.005 "code": -5, 00:17:14.005 "message": "Input/output error" 00:17:14.005 } 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:14.005 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.264 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.523 request: 00:17:14.523 { 00:17:14.523 "name": "nvme0", 00:17:14.523 "trtype": "tcp", 00:17:14.523 "traddr": "10.0.0.3", 00:17:14.523 "adrfam": "ipv4", 00:17:14.523 "trsvcid": "4420", 00:17:14.523 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:14.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:14.523 "prchk_reftag": false, 00:17:14.523 "prchk_guard": false, 00:17:14.523 "hdgst": false, 00:17:14.523 "ddgst": false, 00:17:14.523 "dhchap_key": "key3", 00:17:14.523 "allow_unrecognized_csi": false, 00:17:14.523 "method": "bdev_nvme_attach_controller", 00:17:14.523 "req_id": 1 00:17:14.523 } 00:17:14.523 Got JSON-RPC error response 00:17:14.523 response: 00:17:14.523 { 00:17:14.523 "code": -5, 00:17:14.523 "message": "Input/output error" 00:17:14.523 } 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:14.523 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:14.524 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:14.782 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.783 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:15.350 request: 00:17:15.350 { 00:17:15.350 "name": "nvme0", 00:17:15.350 "trtype": "tcp", 00:17:15.350 "traddr": "10.0.0.3", 00:17:15.350 "adrfam": "ipv4", 00:17:15.350 "trsvcid": "4420", 00:17:15.350 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:15.350 "prchk_reftag": false, 00:17:15.350 "prchk_guard": false, 00:17:15.350 "hdgst": false, 00:17:15.350 "ddgst": false, 00:17:15.350 "dhchap_key": "key0", 00:17:15.350 "dhchap_ctrlr_key": "key1", 00:17:15.350 "allow_unrecognized_csi": false, 00:17:15.350 "method": "bdev_nvme_attach_controller", 00:17:15.350 "req_id": 1 00:17:15.350 } 00:17:15.350 Got JSON-RPC error response 00:17:15.350 response: 00:17:15.350 { 00:17:15.350 "code": -5, 00:17:15.350 "message": "Input/output error" 00:17:15.350 } 00:17:15.350 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.350 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.350 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.350 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:17:15.350 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:15.350 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:15.351 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:15.608 nvme0n1 00:17:15.608 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:15.608 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.608 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:15.866 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.866 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.866 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:16.125 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:17.062 nvme0n1 00:17:17.062 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:17.062 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.062 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.062 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:17.629 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.629 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:17:17.629 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid 34bde053-797d-42f4-ad97-2a3b315837d0 -l 0 --dhchap-secret DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: --dhchap-ctrl-secret DHHC-1:03:YTdiYjAzNjYzZDJiYzg3MjIwN2M4NWE0YTAxM2MwMzQ4ZWU3YTc1NTQzMWExYTZkNzYxYTU5YjFmM2UyYjQ1MDmBJkM=: 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.197 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:18.455 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:19.022 request: 00:17:19.022 { 00:17:19.022 "name": "nvme0", 00:17:19.022 "trtype": "tcp", 00:17:19.022 "traddr": "10.0.0.3", 00:17:19.022 "adrfam": "ipv4", 00:17:19.022 "trsvcid": "4420", 00:17:19.022 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0", 00:17:19.022 "prchk_reftag": false, 00:17:19.022 "prchk_guard": false, 00:17:19.022 "hdgst": false, 00:17:19.022 "ddgst": false, 00:17:19.022 "dhchap_key": "key1", 00:17:19.022 "allow_unrecognized_csi": false, 00:17:19.022 "method": "bdev_nvme_attach_controller", 00:17:19.022 "req_id": 1 00:17:19.022 } 00:17:19.022 Got JSON-RPC error response 00:17:19.022 response: 00:17:19.022 { 00:17:19.022 "code": -5, 00:17:19.022 "message": "Input/output error" 00:17:19.022 } 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.022 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.958 nvme0n1 00:17:19.958 
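The re-key check that just completed can be summarized in a short hedged sketch (addresses, NQNs, socket paths, and key names are the ones from this run; the expected failure mirrors the -5 Input/output error recorded above):

```bash
#!/usr/bin/env bash
# Hedged sketch only: condenses the subsystem re-key sequence traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0

# Rotate the keys the subsystem will accept for this host (target-side RPC,
# default socket /var/tmp/spdk.sock).
"$RPC" nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# The stale key must now be rejected: this attach is expected to fail,
# as it did above with code -5 (Input/output error).
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 \
    || echo "old key rejected, as expected"

# Attaching with the rotated key pair should succeed and print the created
# bdev name (nvme0n1 in this run).
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
```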
06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:19.958 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.958 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:19.958 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.958 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.958 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:20.216 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:20.783 nvme0n1 00:17:20.783 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:20.783 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.783 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:21.041 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.041 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.041 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.299 06:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: '' 2s 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: ]] 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjU2NmU1YjEzNjNmZjliYjA1YmZlNTAyZTViNzJiNmIZbe81: 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:21.299 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: 2s 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:23.201 06:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: ]] 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWU2MjYzYmNjYTk3NWUxOGE1NzM0YzIzMjcxMWM0YmM1ODQ1YzEwNjQ1MTA2MWVhbajMcA==: 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:23.201 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:25.132 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:25.132 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:25.132 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:25.132 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:25.132 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:25.132 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.390 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:26.326 nvme0n1 00:17:26.326 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.326 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.326 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.326 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.326 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.326 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.893 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:26.893 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.893 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:27.151 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.151 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:27.151 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.151 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.152 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.152 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:27.152 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:27.410 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:27.410 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:27.410 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.669 06:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.669 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:28.236 request: 00:17:28.236 { 00:17:28.236 "name": "nvme0", 00:17:28.236 "dhchap_key": "key1", 00:17:28.236 "dhchap_ctrlr_key": "key3", 00:17:28.236 "method": "bdev_nvme_set_keys", 00:17:28.237 "req_id": 1 00:17:28.237 } 00:17:28.237 Got JSON-RPC error response 00:17:28.237 response: 00:17:28.237 { 00:17:28.237 "code": -13, 00:17:28.237 "message": "Permission denied" 00:17:28.237 } 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.237 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:28.495 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:28.495 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.870 06:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.870 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:30.806 nvme0n1 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.806 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:31.372 request: 00:17:31.372 { 00:17:31.372 "name": "nvme0", 00:17:31.372 "dhchap_key": "key2", 00:17:31.372 "dhchap_ctrlr_key": "key0", 00:17:31.372 "method": "bdev_nvme_set_keys", 00:17:31.372 "req_id": 1 00:17:31.372 } 00:17:31.372 Got JSON-RPC error response 00:17:31.372 response: 00:17:31.372 { 00:17:31.372 "code": -13, 00:17:31.372 "message": "Permission denied" 00:17:31.372 } 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:31.372 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.631 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:31.631 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67617 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67617 ']' 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67617 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.007 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67617 00:17:33.007 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:33.007 killing process with pid 67617 00:17:33.007 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:33.007 06:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67617' 00:17:33.007 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67617 00:17:33.007 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67617 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.575 rmmod nvme_tcp 00:17:33.575 rmmod nvme_fabrics 00:17:33.575 rmmod nvme_keyring 00:17:33.575 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70646 ']' 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70646 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70646 ']' 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70646 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70646 00:17:33.835 killing process with pid 70646 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70646' 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70646 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70646 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:33.835 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:34.094 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FEo /tmp/spdk.key-sha256.kWO /tmp/spdk.key-sha384.blT /tmp/spdk.key-sha512.aaF /tmp/spdk.key-sha512.Idj /tmp/spdk.key-sha384.qg5 /tmp/spdk.key-sha256.L2h '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:34.094 00:17:34.094 real 3m8.316s 00:17:34.094 user 7m29.640s 00:17:34.094 sys 0m29.543s 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.094 ************************************ 00:17:34.094 END TEST nvmf_auth_target 00:17:34.094 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
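The nvmf_auth_target run that closes here exercises DH-HMAC-CHAP re-keying on a live TCP connection: the target side is re-keyed with nvmf_subsystem_set_keys, the host side follows up with bdev_nvme_set_keys on the already-attached controller, and a key pair the target was never given is expected to be rejected, which is where the -13 (Permission denied) JSON-RPC responses above come from. A minimal sketch of that sequence, assuming the same RPC sockets, NQNs and pre-registered key names (key1..key3) used in this run:

# sketch only -- paths, NQNs and key names taken from the log above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0

# 1. rotate the keys the target will accept for this host (target RPC socket)
"$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

# 2. re-authenticate the existing host-side controller with the matching pair (host RPC socket)
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 3. a pair the target was not configured with fails with -13 (Permission denied)
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3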
00:17:34.094 ************************************ 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.351 ************************************ 00:17:34.351 START TEST nvmf_bdevio_no_huge 00:17:34.351 ************************************ 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:34.351 * Looking for test storage... 00:17:34.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:34.351 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.352 --rc genhtml_branch_coverage=1 00:17:34.352 --rc genhtml_function_coverage=1 00:17:34.352 --rc genhtml_legend=1 00:17:34.352 --rc geninfo_all_blocks=1 00:17:34.352 --rc geninfo_unexecuted_blocks=1 00:17:34.352 00:17:34.352 ' 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.352 --rc genhtml_branch_coverage=1 00:17:34.352 --rc genhtml_function_coverage=1 00:17:34.352 --rc genhtml_legend=1 00:17:34.352 --rc geninfo_all_blocks=1 00:17:34.352 --rc geninfo_unexecuted_blocks=1 00:17:34.352 00:17:34.352 ' 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.352 --rc genhtml_branch_coverage=1 00:17:34.352 --rc genhtml_function_coverage=1 00:17:34.352 --rc genhtml_legend=1 00:17:34.352 --rc geninfo_all_blocks=1 00:17:34.352 --rc geninfo_unexecuted_blocks=1 00:17:34.352 00:17:34.352 ' 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.352 --rc genhtml_branch_coverage=1 00:17:34.352 --rc genhtml_function_coverage=1 00:17:34.352 --rc genhtml_legend=1 00:17:34.352 --rc geninfo_all_blocks=1 00:17:34.352 --rc geninfo_unexecuted_blocks=1 00:17:34.352 00:17:34.352 ' 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.352 
06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.352 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.615 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.615 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.616 
06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.616 Cannot find device "nvmf_init_br" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.616 Cannot find device "nvmf_init_br2" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.616 Cannot find device "nvmf_tgt_br" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.616 Cannot find device "nvmf_tgt_br2" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.616 Cannot find device "nvmf_init_br" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.616 Cannot find device "nvmf_init_br2" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:34.616 Cannot find device "nvmf_tgt_br" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:34.616 Cannot find device "nvmf_tgt_br2" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:34.616 Cannot find device "nvmf_br" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:34.616 Cannot find device "nvmf_init_if" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:34.616 Cannot find device "nvmf_init_if2" 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:17:34.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.616 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:34.875 06:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:34.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:34.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:17:34.875 00:17:34.875 --- 10.0.0.3 ping statistics --- 00:17:34.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.875 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:34.875 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:34.875 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:17:34.875 00:17:34.875 --- 10.0.0.4 ping statistics --- 00:17:34.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.875 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:34.875 00:17:34.875 --- 10.0.0.1 ping statistics --- 00:17:34.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.875 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:34.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:34.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:17:34.875 00:17:34.875 --- 10.0.0.2 ping statistics --- 00:17:34.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.875 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71272 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71272 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71272 ']' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.875 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.133 [2024-11-27 06:12:40.002999] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
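Condensing the xtrace above: the harness builds a small veth-plus-bridge topology, pins the target-side interfaces inside a fresh network namespace, opens TCP/4420 through iptables (tagged with an SPDK_NVMF comment so the teardown can find the rules again), verifies reachability with pings, and only then launches nvmf_tgt inside that namespace. The sketch below reproduces the same steps with one veth pair per side instead of the two the log creates; interface names, addresses and the nvmf_tgt command line are taken from the trace, the condensation itself is illustrative.

```bash
# Condensed sketch of the topology above (the trace also adds a second pair,
# addressed 10.0.0.2 and 10.0.0.4, in exactly the same way).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry the addresses, the *_br ends become bridge legs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Let NVMe/TCP (port 4420) in from the initiator interface; the SPDK_NVMF
# comment is what the teardown later greps for to strip exactly these rules.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Reachability check, then the target itself, pinned inside the namespace and
# run without hugepages (-s 1024 caps it at 1 GiB of ordinary memory).
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
```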
00:17:35.133 [2024-11-27 06:12:40.003093] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:35.133 [2024-11-27 06:12:40.168495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.393 [2024-11-27 06:12:40.260992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.393 [2024-11-27 06:12:40.261105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.393 [2024-11-27 06:12:40.261119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.393 [2024-11-27 06:12:40.261147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.393 [2024-11-27 06:12:40.261159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.393 [2024-11-27 06:12:40.262329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.393 [2024-11-27 06:12:40.262482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:35.393 [2024-11-27 06:12:40.262626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:35.393 [2024-11-27 06:12:40.262638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.393 [2024-11-27 06:12:40.268893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.330 [2024-11-27 06:12:41.123804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.330 Malloc0 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.330 06:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.330 [2024-11-27 06:12:41.165585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:36.330 { 00:17:36.330 "params": { 00:17:36.330 "name": "Nvme$subsystem", 00:17:36.330 "trtype": "$TEST_TRANSPORT", 00:17:36.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.330 "adrfam": "ipv4", 00:17:36.330 "trsvcid": "$NVMF_PORT", 00:17:36.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.330 "hdgst": ${hdgst:-false}, 00:17:36.330 "ddgst": ${ddgst:-false} 00:17:36.330 }, 00:17:36.330 "method": "bdev_nvme_attach_controller" 00:17:36.330 } 00:17:36.330 EOF 00:17:36.330 )") 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
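At this point the target is fully provisioned for the bdevio run: TCP transport, a malloc-backed namespace under subsystem nqn.2016-06.io.spdk:cnode1, and a listener on the namespaced address (the JSON that gen_nvmf_target_json is assembling here continues just below). A minimal sketch of the same calls issued directly, assuming rpc_cmd in the harness is a thin wrapper around scripts/rpc.py and the default /var/tmp/spdk.sock; flags are copied from the trace.

```bash
# Equivalent provisioning done by hand with rpc.py.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192              # flags as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                          # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                        # the namespaced target address
```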
00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:36.330 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:36.330 "params": { 00:17:36.330 "name": "Nvme1", 00:17:36.330 "trtype": "tcp", 00:17:36.330 "traddr": "10.0.0.3", 00:17:36.330 "adrfam": "ipv4", 00:17:36.330 "trsvcid": "4420", 00:17:36.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.330 "hdgst": false, 00:17:36.330 "ddgst": false 00:17:36.330 }, 00:17:36.330 "method": "bdev_nvme_attach_controller" 00:17:36.330 }' 00:17:36.330 [2024-11-27 06:12:41.229874] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:17:36.330 [2024-11-27 06:12:41.229997] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71315 ] 00:17:36.330 [2024-11-27 06:12:41.398591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.590 [2024-11-27 06:12:41.483952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.590 [2024-11-27 06:12:41.484112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.590 [2024-11-27 06:12:41.484120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.590 [2024-11-27 06:12:41.498649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.849 I/O targets: 00:17:36.849 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:36.849 00:17:36.849 00:17:36.849 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.849 http://cunit.sourceforge.net/ 00:17:36.849 00:17:36.849 00:17:36.849 Suite: bdevio tests on: Nvme1n1 00:17:36.849 Test: blockdev write read block ...passed 00:17:36.849 Test: blockdev write zeroes read block ...passed 00:17:36.849 Test: blockdev write zeroes read no split ...passed 00:17:36.849 Test: blockdev write zeroes read split ...passed 00:17:36.849 Test: blockdev write zeroes read split partial ...passed 00:17:36.849 Test: blockdev reset ...[2024-11-27 06:12:41.756582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:36.849 [2024-11-27 06:12:41.756754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e320 (9): Bad file descriptor 00:17:36.849 [2024-11-27 06:12:41.768466] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:36.849 passed 00:17:36.849 Test: blockdev write read 8 blocks ...passed 00:17:36.849 Test: blockdev write read size > 128k ...passed 00:17:36.849 Test: blockdev write read invalid size ...passed 00:17:36.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:36.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:36.850 Test: blockdev write read max offset ...passed 00:17:36.850 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:36.850 Test: blockdev writev readv 8 blocks ...passed 00:17:36.850 Test: blockdev writev readv 30 x 1block ...passed 00:17:36.850 Test: blockdev writev readv block ...passed 00:17:36.850 Test: blockdev writev readv size > 128k ...passed 00:17:36.850 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:36.850 Test: blockdev comparev and writev ...[2024-11-27 06:12:41.778300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.778381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.778432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.778468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.779000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.779027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.779044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.779055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.779859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.779886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.779942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.779952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.780487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.780540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.780572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.850 [2024-11-27 06:12:41.780592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.850 passed 00:17:36.850 Test: blockdev nvme passthru rw ...passed 00:17:36.850 Test: blockdev nvme passthru vendor specific ...[2024-11-27 06:12:41.781632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.850 [2024-11-27 06:12:41.781656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.781823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.850 [2024-11-27 06:12:41.781852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.781961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.850 [2024-11-27 06:12:41.781981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.850 [2024-11-27 06:12:41.782158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.850 [2024-11-27 06:12:41.782187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.850 passed 00:17:36.850 Test: blockdev nvme admin passthru ...passed 00:17:36.850 Test: blockdev copy ...passed 00:17:36.850 00:17:36.850 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.850 suites 1 1 n/a 0 0 00:17:36.850 tests 23 23 23 0 0 00:17:36.850 asserts 152 152 152 0 n/a 00:17:36.850 00:17:36.850 Elapsed time = 0.176 seconds 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.109 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.368 rmmod nvme_tcp 00:17:37.368 rmmod nvme_fabrics 00:17:37.368 rmmod nvme_keyring 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71272 ']' 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71272 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71272 ']' 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71272 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71272 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:37.368 killing process with pid 71272 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71272' 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71272 00:17:37.368 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71272 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.627 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.886 06:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:17:37.886 00:17:37.886 real 0m3.740s 00:17:37.886 user 0m11.201s 00:17:37.886 sys 0m1.564s 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.886 ************************************ 00:17:37.886 END TEST nvmf_bdevio_no_huge 00:17:37.886 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.886 ************************************ 00:17:38.145 06:12:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.146 ************************************ 00:17:38.146 START TEST nvmf_tls 00:17:38.146 ************************************ 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:38.146 * Looking for test storage... 
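That closes nvmf_bdevio_no_huge; before nvmf_tls rebuilds the same topology below, the cleanup it just ran is worth spelling out. Everything here is read straight from the trace except the final `ip netns delete`, which is an assumption about what the remove_spdk_ns helper amounts to.

```bash
# Teardown sketch matching the trace: stop the target, drop the kernel modules,
# strip only the SPDK-tagged firewall rules, dismantle bridge and veth pairs.
kill "$nvmfpid"                                         # killprocess 71272 in the log
modprobe -r nvme-tcp nvme-fabrics                       # mirrors the rmmod lines above

iptables-save | grep -v SPDK_NVMF | iptables-restore    # the iptr helper

for leg in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$leg" nomaster
    ip link set "$leg" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                        # assumed body of remove_spdk_ns
```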
00:17:38.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.146 --rc genhtml_branch_coverage=1 00:17:38.146 --rc genhtml_function_coverage=1 00:17:38.146 --rc genhtml_legend=1 00:17:38.146 --rc geninfo_all_blocks=1 00:17:38.146 --rc geninfo_unexecuted_blocks=1 00:17:38.146 00:17:38.146 ' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.146 --rc genhtml_branch_coverage=1 00:17:38.146 --rc genhtml_function_coverage=1 00:17:38.146 --rc genhtml_legend=1 00:17:38.146 --rc geninfo_all_blocks=1 00:17:38.146 --rc geninfo_unexecuted_blocks=1 00:17:38.146 00:17:38.146 ' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.146 --rc genhtml_branch_coverage=1 00:17:38.146 --rc genhtml_function_coverage=1 00:17:38.146 --rc genhtml_legend=1 00:17:38.146 --rc geninfo_all_blocks=1 00:17:38.146 --rc geninfo_unexecuted_blocks=1 00:17:38.146 00:17:38.146 ' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.146 --rc genhtml_branch_coverage=1 00:17:38.146 --rc genhtml_function_coverage=1 00:17:38.146 --rc genhtml_legend=1 00:17:38.146 --rc geninfo_all_blocks=1 00:17:38.146 --rc geninfo_unexecuted_blocks=1 00:17:38.146 00:17:38.146 ' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.146 06:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.146 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:38.147 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.147 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:38.147 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.147 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.147 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.147 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.405 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.405 
06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:38.405 Cannot find device "nvmf_init_br" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:38.405 Cannot find device "nvmf_init_br2" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:38.405 Cannot find device "nvmf_tgt_br" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.405 Cannot find device "nvmf_tgt_br2" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:38.405 Cannot find device "nvmf_init_br" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:38.405 Cannot find device "nvmf_init_br2" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:38.405 Cannot find device "nvmf_tgt_br" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:38.405 Cannot find device "nvmf_tgt_br2" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:38.405 Cannot find device "nvmf_br" 00:17:38.405 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:38.406 Cannot find device "nvmf_init_if" 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.406 Cannot find device "nvmf_init_if2" 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.406 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:38.665 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:38.666 06:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:38.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:17:38.666 00:17:38.666 --- 10.0.0.3 ping statistics --- 00:17:38.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.666 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:38.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:38.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:17:38.666 00:17:38.666 --- 10.0.0.4 ping statistics --- 00:17:38.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.666 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:38.666 00:17:38.666 --- 10.0.0.1 ping statistics --- 00:17:38.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.666 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:38.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:38.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:38.666 00:17:38.666 --- 10.0.0.2 ping statistics --- 00:17:38.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.666 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71551 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71551 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71551 ']' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.666 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 [2024-11-27 06:12:43.754231] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:17:38.666 [2024-11-27 06:12:43.754321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.926 [2024-11-27 06:12:43.909562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.926 [2024-11-27 06:12:43.978805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.926 [2024-11-27 06:12:43.978886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.926 [2024-11-27 06:12:43.978900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.926 [2024-11-27 06:12:43.978911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.926 [2024-11-27 06:12:43.978920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.926 [2024-11-27 06:12:43.979409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.926 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.926 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.926 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.926 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.926 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.185 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.185 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:39.185 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:39.444 true 00:17:39.444 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.444 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:39.703 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:39.703 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:39.703 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:39.962 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.962 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:40.220 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:40.220 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:40.220 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:40.478 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:40.478 06:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:40.737 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:40.737 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:40.737 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:40.737 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:40.996 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:40.996 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:40.996 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:41.254 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:41.254 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:41.512 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:41.512 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:41.512 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:41.769 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:41.769 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:42.027 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:42.027 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:42.028 06:12:47 
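The RPC exchanges above pin the ssl socket implementation's TLS version and toggle kTLS, reading each setting back with jq to verify it took effect. A condensed sketch of that round-trip (rpc.py stands for the full scripts/rpc.py path used in the trace; all flags are taken from it):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
  rpc.py sock_impl_set_options -i ssl --enable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect true
  rpc.py sock_impl_set_options -i ssl --disable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect false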
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.PNSq9Wm43L 00:17:42.028 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:42.285 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.0Tdx4P45zX 00:17:42.285 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:42.285 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:42.285 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PNSq9Wm43L 00:17:42.285 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.0Tdx4P45zX 00:17:42.286 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:42.286 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:42.866 [2024-11-27 06:12:47.709102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.866 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.PNSq9Wm43L 00:17:42.866 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PNSq9Wm43L 00:17:42.866 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.124 [2024-11-27 06:12:48.057065] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.124 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.382 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:43.640 [2024-11-27 06:12:48.557124] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.640 [2024-11-27 06:12:48.557425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:43.640 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.898 malloc0 00:17:43.898 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:44.157 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
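The key strings echoed above are TLS pre-shared keys in the NVMe interchange format: an NVMeTLSkey-1 prefix, a two-digit hash field (01 here), and a base64 blob that, judging from the helper's output, encodes the configured key bytes with a checksum appended; treat that description as an inference from the trace rather than a spec quote. Persisting one of the keys for the keyring follows the pattern below (key value copied verbatim from the trace; the temp path differs per run):

  key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  key_path=$(mktemp)             # /tmp/tmp.PNSq9Wm43L in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"         # the test keeps key files private before registering them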
/tmp/tmp.PNSq9Wm43L 00:17:44.415 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:44.415 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PNSq9Wm43L 00:17:56.713 Initializing NVMe Controllers 00:17:56.713 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.713 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.713 Initialization complete. Launching workers. 00:17:56.713 ======================================================== 00:17:56.713 Latency(us) 00:17:56.713 Device Information : IOPS MiB/s Average min max 00:17:56.713 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9324.99 36.43 6864.94 1554.58 13953.55 00:17:56.713 ======================================================== 00:17:56.713 Total : 9324.99 36.43 6864.94 1554.58 13953.55 00:17:56.713 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PNSq9Wm43L 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PNSq9Wm43L 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71782 00:17:56.713 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71782 /var/tmp/bdevperf.sock 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71782 ']' 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
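For reference, the setup_nvmf_tgt steps traced above reduce to the following RPC sequence against the target (rpc.py abbreviates the full scripts/rpc.py path; NQNs, address, and key path are the ones shown in the trace). The -k flag on the listener is what turns on the experimental TLS support that the spdk_nvme_perf run with --psk-path then exercises:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.PNSq9Wm43L
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0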
00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.714 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.714 [2024-11-27 06:12:59.752649] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:17:56.714 [2024-11-27 06:12:59.752743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71782 ] 00:17:56.714 [2024-11-27 06:12:59.906351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.714 [2024-11-27 06:12:59.969426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.714 [2024-11-27 06:13:00.031319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.714 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.714 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.714 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PNSq9Wm43L 00:17:56.714 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:56.714 [2024-11-27 06:13:01.355606] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.714 TLSTESTn1 00:17:56.714 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:56.714 Running I/O for 10 seconds... 
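The run_bdevperf pass kicked off above (its per-second throughput follows) boils down to this host-side sequence; every path and flag is taken from the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PNSq9Wm43L
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests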
00:17:58.584 4286.00 IOPS, 16.74 MiB/s [2024-11-27T06:13:04.625Z] 4331.00 IOPS, 16.92 MiB/s [2024-11-27T06:13:05.594Z] 4389.00 IOPS, 17.14 MiB/s [2024-11-27T06:13:06.973Z] 4423.75 IOPS, 17.28 MiB/s [2024-11-27T06:13:07.908Z] 4454.20 IOPS, 17.40 MiB/s [2024-11-27T06:13:08.844Z] 4447.50 IOPS, 17.37 MiB/s [2024-11-27T06:13:09.779Z] 4432.43 IOPS, 17.31 MiB/s [2024-11-27T06:13:10.713Z] 4420.75 IOPS, 17.27 MiB/s [2024-11-27T06:13:11.661Z] 4428.33 IOPS, 17.30 MiB/s [2024-11-27T06:13:11.661Z] 4443.10 IOPS, 17.36 MiB/s 00:18:06.564 Latency(us) 00:18:06.564 [2024-11-27T06:13:11.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.564 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:06.564 Verification LBA range: start 0x0 length 0x2000 00:18:06.564 TLSTESTn1 : 10.02 4449.07 17.38 0.00 0.00 28719.11 5272.67 24784.52 00:18:06.564 [2024-11-27T06:13:11.661Z] =================================================================================================================== 00:18:06.564 [2024-11-27T06:13:11.661Z] Total : 4449.07 17.38 0.00 0.00 28719.11 5272.67 24784.52 00:18:06.564 { 00:18:06.564 "results": [ 00:18:06.564 { 00:18:06.564 "job": "TLSTESTn1", 00:18:06.564 "core_mask": "0x4", 00:18:06.564 "workload": "verify", 00:18:06.564 "status": "finished", 00:18:06.564 "verify_range": { 00:18:06.564 "start": 0, 00:18:06.564 "length": 8192 00:18:06.564 }, 00:18:06.564 "queue_depth": 128, 00:18:06.564 "io_size": 4096, 00:18:06.564 "runtime": 10.015125, 00:18:06.564 "iops": 4449.070780444577, 00:18:06.564 "mibps": 17.37918273611163, 00:18:06.564 "io_failed": 0, 00:18:06.564 "io_timeout": 0, 00:18:06.564 "avg_latency_us": 28719.11112837609, 00:18:06.564 "min_latency_us": 5272.669090909091, 00:18:06.564 "max_latency_us": 24784.523636363636 00:18:06.564 } 00:18:06.564 ], 00:18:06.564 "core_count": 1 00:18:06.564 } 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71782 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71782 ']' 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71782 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71782 00:18:06.564 killing process with pid 71782 00:18:06.564 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.564 00:18:06.564 Latency(us) 00:18:06.564 [2024-11-27T06:13:11.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.564 [2024-11-27T06:13:11.661Z] =================================================================================================================== 00:18:06.564 [2024-11-27T06:13:11.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71782' 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71782 00:18:06.564 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71782 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0Tdx4P45zX 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0Tdx4P45zX 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0Tdx4P45zX 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0Tdx4P45zX 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71922 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71922 /var/tmp/bdevperf.sock 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71922 ']' 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.823 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.823 [2024-11-27 06:13:11.876890] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
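Tests 147, 150, 153, and 156 that follow repeat the attach with a mismatched key, host NQN, subsystem NQN, or an empty key path and expect it to fail; the NOT wrapper turns the expected bdev_nvme_attach_controller error into a passing assertion. A minimal stand-in for that wrapper, assuming from the es=... trace (not from the autotest_common.sh source) that it simply inverts the exit status:

  NOT() { ! "$@"; }    # hypothetical simplification of the helper seen in the trace
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0Tdx4P45zX   # wrong key, failure expected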
00:18:06.823 [2024-11-27 06:13:11.877721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71922 ] 00:18:07.082 [2024-11-27 06:13:12.024852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.082 [2024-11-27 06:13:12.070687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.082 [2024-11-27 06:13:12.124608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.341 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.341 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:07.341 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0Tdx4P45zX 00:18:07.599 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.858 [2024-11-27 06:13:12.704690] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.858 [2024-11-27 06:13:12.714734] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:07.858 [2024-11-27 06:13:12.715540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147bff0 (107): Transport endpoint is not connected 00:18:07.858 [2024-11-27 06:13:12.716541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147bff0 (9): Bad file descriptor 00:18:07.858 [2024-11-27 06:13:12.717537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:07.858 [2024-11-27 06:13:12.717572] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:07.858 [2024-11-27 06:13:12.717582] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:07.858 [2024-11-27 06:13:12.717596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:07.858 request: 00:18:07.858 { 00:18:07.858 "name": "TLSTEST", 00:18:07.858 "trtype": "tcp", 00:18:07.858 "traddr": "10.0.0.3", 00:18:07.858 "adrfam": "ipv4", 00:18:07.858 "trsvcid": "4420", 00:18:07.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.858 "prchk_reftag": false, 00:18:07.858 "prchk_guard": false, 00:18:07.858 "hdgst": false, 00:18:07.858 "ddgst": false, 00:18:07.858 "psk": "key0", 00:18:07.858 "allow_unrecognized_csi": false, 00:18:07.858 "method": "bdev_nvme_attach_controller", 00:18:07.858 "req_id": 1 00:18:07.858 } 00:18:07.858 Got JSON-RPC error response 00:18:07.858 response: 00:18:07.858 { 00:18:07.858 "code": -5, 00:18:07.858 "message": "Input/output error" 00:18:07.858 } 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71922 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71922 ']' 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71922 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71922 00:18:07.858 killing process with pid 71922 00:18:07.858 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.858 00:18:07.858 Latency(us) 00:18:07.858 [2024-11-27T06:13:12.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.858 [2024-11-27T06:13:12.955Z] =================================================================================================================== 00:18:07.858 [2024-11-27T06:13:12.955Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71922' 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71922 00:18:07.858 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71922 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PNSq9Wm43L 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PNSq9Wm43L 
00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:08.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PNSq9Wm43L 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PNSq9Wm43L 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71943 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71943 /var/tmp/bdevperf.sock 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71943 ']' 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.117 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.117 [2024-11-27 06:13:13.031121] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:18:08.117 [2024-11-27 06:13:13.031260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71943 ] 00:18:08.117 [2024-11-27 06:13:13.180489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.376 [2024-11-27 06:13:13.240050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.376 [2024-11-27 06:13:13.295125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.376 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.376 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.376 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PNSq9Wm43L 00:18:08.634 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:08.893 [2024-11-27 06:13:13.900112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.893 [2024-11-27 06:13:13.904920] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:08.893 [2024-11-27 06:13:13.904974] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:08.893 [2024-11-27 06:13:13.905042] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:08.893 [2024-11-27 06:13:13.905694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6ff0 (107): Transport endpoint is not connected 00:18:08.893 [2024-11-27 06:13:13.906682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6ff0 (9): Bad file descriptor 00:18:08.893 [2024-11-27 06:13:13.907679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:08.893 [2024-11-27 06:13:13.907715] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:08.893 [2024-11-27 06:13:13.907741] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:08.893 [2024-11-27 06:13:13.907754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:08.893 request: 00:18:08.893 { 00:18:08.893 "name": "TLSTEST", 00:18:08.893 "trtype": "tcp", 00:18:08.893 "traddr": "10.0.0.3", 00:18:08.893 "adrfam": "ipv4", 00:18:08.893 "trsvcid": "4420", 00:18:08.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:08.893 "prchk_reftag": false, 00:18:08.893 "prchk_guard": false, 00:18:08.893 "hdgst": false, 00:18:08.893 "ddgst": false, 00:18:08.893 "psk": "key0", 00:18:08.893 "allow_unrecognized_csi": false, 00:18:08.893 "method": "bdev_nvme_attach_controller", 00:18:08.893 "req_id": 1 00:18:08.893 } 00:18:08.893 Got JSON-RPC error response 00:18:08.893 response: 00:18:08.893 { 00:18:08.893 "code": -5, 00:18:08.893 "message": "Input/output error" 00:18:08.893 } 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71943 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71943 ']' 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71943 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71943 00:18:08.893 killing process with pid 71943 00:18:08.893 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.893 00:18:08.893 Latency(us) 00:18:08.893 [2024-11-27T06:13:13.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.893 [2024-11-27T06:13:13.990Z] =================================================================================================================== 00:18:08.893 [2024-11-27T06:13:13.990Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71943' 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71943 00:18:08.893 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71943 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PNSq9Wm43L 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PNSq9Wm43L 
00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PNSq9Wm43L 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PNSq9Wm43L 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71964 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.152 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71964 /var/tmp/bdevperf.sock 00:18:09.153 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71964 ']' 00:18:09.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.153 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.153 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.153 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.153 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.153 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.153 [2024-11-27 06:13:14.219226] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:18:09.153 [2024-11-27 06:13:14.219326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71964 ] 00:18:09.412 [2024-11-27 06:13:14.362776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.412 [2024-11-27 06:13:14.410221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.412 [2024-11-27 06:13:14.467564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.346 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.346 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.346 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PNSq9Wm43L 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.646 [2024-11-27 06:13:15.672834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.646 [2024-11-27 06:13:15.677638] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.646 [2024-11-27 06:13:15.677675] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.646 [2024-11-27 06:13:15.677737] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.646 [2024-11-27 06:13:15.678374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7ff0 (107): Transport endpoint is not connected 00:18:10.646 [2024-11-27 06:13:15.679359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7ff0 (9): Bad file descriptor 00:18:10.646 [2024-11-27 06:13:15.680356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:10.646 [2024-11-27 06:13:15.680398] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:10.646 [2024-11-27 06:13:15.680409] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:10.646 [2024-11-27 06:13:15.680425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:10.646 request: 00:18:10.646 { 00:18:10.646 "name": "TLSTEST", 00:18:10.646 "trtype": "tcp", 00:18:10.646 "traddr": "10.0.0.3", 00:18:10.646 "adrfam": "ipv4", 00:18:10.646 "trsvcid": "4420", 00:18:10.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.646 "prchk_reftag": false, 00:18:10.646 "prchk_guard": false, 00:18:10.646 "hdgst": false, 00:18:10.646 "ddgst": false, 00:18:10.646 "psk": "key0", 00:18:10.646 "allow_unrecognized_csi": false, 00:18:10.646 "method": "bdev_nvme_attach_controller", 00:18:10.646 "req_id": 1 00:18:10.646 } 00:18:10.646 Got JSON-RPC error response 00:18:10.646 response: 00:18:10.646 { 00:18:10.646 "code": -5, 00:18:10.646 "message": "Input/output error" 00:18:10.646 } 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71964 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71964 ']' 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71964 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.646 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71964 00:18:10.914 killing process with pid 71964 00:18:10.914 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.914 00:18:10.914 Latency(us) 00:18:10.914 [2024-11-27T06:13:16.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.914 [2024-11-27T06:13:16.011Z] =================================================================================================================== 00:18:10.914 [2024-11-27T06:13:16.011Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71964' 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71964 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71964 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.914 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.915 06:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71993 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71993 /var/tmp/bdevperf.sock 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71993 ']' 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.915 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.915 [2024-11-27 06:13:15.989327] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:18:10.915 [2024-11-27 06:13:15.989420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71993 ] 00:18:11.173 [2024-11-27 06:13:16.137261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.173 [2024-11-27 06:13:16.188442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.173 [2024-11-27 06:13:16.251178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.432 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.432 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.432 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:11.690 [2024-11-27 06:13:16.593044] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:11.690 [2024-11-27 06:13:16.593179] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:11.690 request: 00:18:11.690 { 00:18:11.690 "name": "key0", 00:18:11.690 "path": "", 00:18:11.690 "method": "keyring_file_add_key", 00:18:11.690 "req_id": 1 00:18:11.690 } 00:18:11.690 Got JSON-RPC error response 00:18:11.690 response: 00:18:11.690 { 00:18:11.690 "code": -1, 00:18:11.690 "message": "Operation not permitted" 00:18:11.690 } 00:18:11.690 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.948 [2024-11-27 06:13:16.837341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.948 [2024-11-27 06:13:16.837513] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:11.948 request: 00:18:11.948 { 00:18:11.948 "name": "TLSTEST", 00:18:11.948 "trtype": "tcp", 00:18:11.948 "traddr": "10.0.0.3", 00:18:11.948 "adrfam": "ipv4", 00:18:11.948 "trsvcid": "4420", 00:18:11.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.948 "prchk_reftag": false, 00:18:11.948 "prchk_guard": false, 00:18:11.948 "hdgst": false, 00:18:11.948 "ddgst": false, 00:18:11.948 "psk": "key0", 00:18:11.948 "allow_unrecognized_csi": false, 00:18:11.948 "method": "bdev_nvme_attach_controller", 00:18:11.948 "req_id": 1 00:18:11.948 } 00:18:11.948 Got JSON-RPC error response 00:18:11.948 response: 00:18:11.948 { 00:18:11.948 "code": -126, 00:18:11.948 "message": "Required key not available" 00:18:11.948 } 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71993 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71993 ']' 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71993 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.948 06:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71993 00:18:11.948 killing process with pid 71993 00:18:11.948 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.948 00:18:11.948 Latency(us) 00:18:11.948 [2024-11-27T06:13:17.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.948 [2024-11-27T06:13:17.045Z] =================================================================================================================== 00:18:11.948 [2024-11-27T06:13:17.045Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71993' 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71993 00:18:11.948 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71993 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71551 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71551 ']' 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71551 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71551 00:18:12.207 killing process with pid 71551 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71551' 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71551 00:18:12.207 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71551 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JzkmDIITwb 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JzkmDIITwb 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72028 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72028 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72028 ']' 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.465 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.465 [2024-11-27 06:13:17.545220] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:12.465 [2024-11-27 06:13:17.545334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.724 [2024-11-27 06:13:17.695349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.724 [2024-11-27 06:13:17.737592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.724 [2024-11-27 06:13:17.737663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:12.724 [2024-11-27 06:13:17.737689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.724 [2024-11-27 06:13:17.737697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.724 [2024-11-27 06:13:17.737704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.724 [2024-11-27 06:13:17.738133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.724 [2024-11-27 06:13:17.792674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JzkmDIITwb 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JzkmDIITwb 00:18:12.982 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:13.240 [2024-11-27 06:13:18.146046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.240 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:13.499 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:13.757 [2024-11-27 06:13:18.738262] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.757 [2024-11-27 06:13:18.738547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:13.757 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.015 malloc0 00:18:14.015 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.273 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:14.531 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.789 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JzkmDIITwb 00:18:14.789 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
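Editorial note on the key_long value produced earlier by format_interchange_psk: it follows the NVMe TLS PSK interchange convention of a NVMeTLSkey-1 prefix, a hash identifier (2 in this run), a Base64 payload, and a trailing colon. The Python sketch below is an illustrative reconstruction only, not the actual nvmf/common.sh format_key helper; it assumes the Base64 payload is the configured secret with its CRC32 appended (packed little-endian).

import base64
import struct
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    # Hypothetical re-implementation of the format_interchange_psk step traced above.
    # Assumption: payload = secret bytes followed by their CRC32, then Base64-encoded.
    data = secret.encode("ascii")
    data += struct.pack("<I", zlib.crc32(data))
    return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(data).decode("ascii"))

# Same inputs as the traced call: the 48-character secret and digest 2.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))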
00:18:14.789 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.789 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.789 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JzkmDIITwb 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72073 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72073 /var/tmp/bdevperf.sock 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72073 ']' 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.790 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.790 [2024-11-27 06:13:19.743663] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:18:14.790 [2024-11-27 06:13:19.743780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72073 ] 00:18:15.049 [2024-11-27 06:13:19.899055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.049 [2024-11-27 06:13:19.982004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.049 [2024-11-27 06:13:20.060069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.617 06:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.617 06:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.617 06:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:16.186 06:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.186 [2024-11-27 06:13:21.178932] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.186 TLSTESTn1 00:18:16.186 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:16.446 Running I/O for 10 seconds... 00:18:18.316 3887.00 IOPS, 15.18 MiB/s [2024-11-27T06:13:24.788Z] 3928.00 IOPS, 15.34 MiB/s [2024-11-27T06:13:25.725Z] 3904.00 IOPS, 15.25 MiB/s [2024-11-27T06:13:26.678Z] 3894.50 IOPS, 15.21 MiB/s [2024-11-27T06:13:27.617Z] 3883.80 IOPS, 15.17 MiB/s [2024-11-27T06:13:28.554Z] 3897.00 IOPS, 15.22 MiB/s [2024-11-27T06:13:29.492Z] 3907.00 IOPS, 15.26 MiB/s [2024-11-27T06:13:30.428Z] 3902.25 IOPS, 15.24 MiB/s [2024-11-27T06:13:31.806Z] 3887.00 IOPS, 15.18 MiB/s [2024-11-27T06:13:31.806Z] 3877.00 IOPS, 15.14 MiB/s 00:18:26.709 Latency(us) 00:18:26.709 [2024-11-27T06:13:31.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.709 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:26.709 Verification LBA range: start 0x0 length 0x2000 00:18:26.709 TLSTESTn1 : 10.02 3883.23 15.17 0.00 0.00 32908.96 5034.36 27167.65 00:18:26.709 [2024-11-27T06:13:31.806Z] =================================================================================================================== 00:18:26.709 [2024-11-27T06:13:31.806Z] Total : 3883.23 15.17 0.00 0.00 32908.96 5034.36 27167.65 00:18:26.709 { 00:18:26.709 "results": [ 00:18:26.709 { 00:18:26.709 "job": "TLSTESTn1", 00:18:26.709 "core_mask": "0x4", 00:18:26.709 "workload": "verify", 00:18:26.709 "status": "finished", 00:18:26.709 "verify_range": { 00:18:26.709 "start": 0, 00:18:26.709 "length": 8192 00:18:26.709 }, 00:18:26.709 "queue_depth": 128, 00:18:26.709 "io_size": 4096, 00:18:26.709 "runtime": 10.016668, 00:18:26.709 "iops": 3883.227436508827, 00:18:26.709 "mibps": 15.168857173862605, 00:18:26.709 "io_failed": 0, 00:18:26.709 "io_timeout": 0, 00:18:26.709 "avg_latency_us": 32908.95960099751, 00:18:26.709 "min_latency_us": 5034.356363636363, 00:18:26.709 
"max_latency_us": 27167.65090909091 00:18:26.709 } 00:18:26.709 ], 00:18:26.709 "core_count": 1 00:18:26.709 } 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72073 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72073 ']' 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72073 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72073 00:18:26.709 killing process with pid 72073 00:18:26.709 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.709 00:18:26.709 Latency(us) 00:18:26.709 [2024-11-27T06:13:31.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.709 [2024-11-27T06:13:31.806Z] =================================================================================================================== 00:18:26.709 [2024-11-27T06:13:31.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72073' 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72073 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72073 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JzkmDIITwb 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JzkmDIITwb 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JzkmDIITwb 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JzkmDIITwb 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JzkmDIITwb 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72214 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72214 /var/tmp/bdevperf.sock 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72214 ']' 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.709 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.969 [2024-11-27 06:13:31.817048] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:18:26.969 [2024-11-27 06:13:31.817185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72214 ] 00:18:26.969 [2024-11-27 06:13:31.960077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.969 [2024-11-27 06:13:32.036624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.228 [2024-11-27 06:13:32.120905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.228 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.228 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.228 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:27.487 [2024-11-27 06:13:32.439840] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JzkmDIITwb': 0100666 00:18:27.487 [2024-11-27 06:13:32.439902] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:27.487 request: 00:18:27.487 { 00:18:27.487 "name": "key0", 00:18:27.487 "path": "/tmp/tmp.JzkmDIITwb", 00:18:27.487 "method": "keyring_file_add_key", 00:18:27.487 "req_id": 1 00:18:27.487 } 00:18:27.487 Got JSON-RPC error response 00:18:27.487 response: 00:18:27.487 { 00:18:27.487 "code": -1, 00:18:27.487 "message": "Operation not permitted" 00:18:27.487 } 00:18:27.487 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.746 [2024-11-27 06:13:32.752057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.746 [2024-11-27 06:13:32.752201] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:27.746 request: 00:18:27.746 { 00:18:27.746 "name": "TLSTEST", 00:18:27.746 "trtype": "tcp", 00:18:27.746 "traddr": "10.0.0.3", 00:18:27.746 "adrfam": "ipv4", 00:18:27.746 "trsvcid": "4420", 00:18:27.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.746 "prchk_reftag": false, 00:18:27.746 "prchk_guard": false, 00:18:27.746 "hdgst": false, 00:18:27.746 "ddgst": false, 00:18:27.746 "psk": "key0", 00:18:27.746 "allow_unrecognized_csi": false, 00:18:27.746 "method": "bdev_nvme_attach_controller", 00:18:27.746 "req_id": 1 00:18:27.746 } 00:18:27.746 Got JSON-RPC error response 00:18:27.746 response: 00:18:27.746 { 00:18:27.746 "code": -126, 00:18:27.746 "message": "Required key not available" 00:18:27.746 } 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72214 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72214 ']' 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72214 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72214 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.746 killing process with pid 72214 00:18:27.746 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72214' 00:18:27.747 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.747 00:18:27.747 Latency(us) 00:18:27.747 [2024-11-27T06:13:32.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.747 [2024-11-27T06:13:32.844Z] =================================================================================================================== 00:18:27.747 [2024-11-27T06:13:32.844Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.747 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72214 00:18:27.747 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72214 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72028 00:18:28.314 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72028 ']' 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72028 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72028 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.315 killing process with pid 72028 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72028' 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72028 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72028 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72246 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72246 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72246 ']' 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.315 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.574 [2024-11-27 06:13:33.452838] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:28.574 [2024-11-27 06:13:33.452954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.574 [2024-11-27 06:13:33.596347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.574 [2024-11-27 06:13:33.662333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.574 [2024-11-27 06:13:33.662399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.574 [2024-11-27 06:13:33.662410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.574 [2024-11-27 06:13:33.662418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.574 [2024-11-27 06:13:33.662425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
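Editorial note on the failures traced above: after chmod 0666, keyring_file_add_key rejects /tmp/tmp.JzkmDIITwb ("Invalid permissions for key file ... 0100666"), and bdev_nvme_attach_controller then fails with "Required key not available" because the key never made it into the keyring. The check below is only a sketch of the policy the log suggests, assuming any group or other permission bit disqualifies the file; it is not the keyring module's actual code.

import os
import stat

def check_key_file(path: str) -> None:
    # Sketch: a key file must not be accessible to group or others,
    # so mode 0600 is accepted while 0666 is rejected.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError("Invalid permissions for key file '%s': 0%o" % (path, mode))

# check_key_file("/tmp/tmp.JzkmDIITwb")  # raises after chmod 0666, passes after chmod 0600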
00:18:28.574 [2024-11-27 06:13:33.662873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.833 [2024-11-27 06:13:33.721844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.400 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.400 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.400 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.400 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.400 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JzkmDIITwb 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JzkmDIITwb 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.JzkmDIITwb 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JzkmDIITwb 00:18:29.659 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.918 [2024-11-27 06:13:34.808988] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.918 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.176 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:30.435 [2024-11-27 06:13:35.333049] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.435 [2024-11-27 06:13:35.333278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:30.435 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.694 malloc0 00:18:30.694 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.952 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:30.952 
[2024-11-27 06:13:36.037226] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JzkmDIITwb': 0100666 00:18:30.952 [2024-11-27 06:13:36.037315] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:30.952 request: 00:18:30.953 { 00:18:30.953 "name": "key0", 00:18:30.953 "path": "/tmp/tmp.JzkmDIITwb", 00:18:30.953 "method": "keyring_file_add_key", 00:18:30.953 "req_id": 1 00:18:30.953 } 00:18:30.953 Got JSON-RPC error response 00:18:30.953 response: 00:18:30.953 { 00:18:30.953 "code": -1, 00:18:30.953 "message": "Operation not permitted" 00:18:30.953 } 00:18:31.211 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.211 [2024-11-27 06:13:36.273328] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:31.211 [2024-11-27 06:13:36.273390] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:31.211 request: 00:18:31.211 { 00:18:31.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.212 "host": "nqn.2016-06.io.spdk:host1", 00:18:31.212 "psk": "key0", 00:18:31.212 "method": "nvmf_subsystem_add_host", 00:18:31.212 "req_id": 1 00:18:31.212 } 00:18:31.212 Got JSON-RPC error response 00:18:31.212 response: 00:18:31.212 { 00:18:31.212 "code": -32603, 00:18:31.212 "message": "Internal error" 00:18:31.212 } 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72246 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72246 ']' 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72246 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.212 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72246 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.470 killing process with pid 72246 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72246' 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72246 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72246 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JzkmDIITwb 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.470 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72315 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72315 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72315 ']' 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.471 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.730 [2024-11-27 06:13:36.582662] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:31.730 [2024-11-27 06:13:36.582728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.730 [2024-11-27 06:13:36.722762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.730 [2024-11-27 06:13:36.762569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.730 [2024-11-27 06:13:36.762652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.730 [2024-11-27 06:13:36.762663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.730 [2024-11-27 06:13:36.762677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.730 [2024-11-27 06:13:36.762683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
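Editorial note: with the key file restored to mode 0600, the setup_nvmf_tgt sequence traced below succeeds again. As a condensed view, the sketch replays the same rpc.py calls in the same order, with paths and arguments taken from the trace itself; it is a convenience wrapper for reading the log, not part of the test suite.

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as used in the trace

def rpc(*args: str) -> None:
    # Each call mirrors one traced setup_nvmf_tgt step.
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1", "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("keyring_file_add_key", "key0", "/tmp/tmp.JzkmDIITwb")
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "--psk", "key0")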
00:18:31.730 [2024-11-27 06:13:36.763036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.730 [2024-11-27 06:13:36.817070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JzkmDIITwb 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JzkmDIITwb 00:18:32.022 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.282 [2024-11-27 06:13:37.146864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.282 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.541 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:32.799 [2024-11-27 06:13:37.687032] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.799 [2024-11-27 06:13:37.687303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.799 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.058 malloc0 00:18:33.058 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.317 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:33.575 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72363 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72363 /var/tmp/bdevperf.sock 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72363 ']' 
00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.835 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.835 [2024-11-27 06:13:38.727098] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:33.835 [2024-11-27 06:13:38.727696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72363 ] 00:18:33.835 [2024-11-27 06:13:38.876915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.094 [2024-11-27 06:13:38.947795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.094 [2024-11-27 06:13:39.006936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.661 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.661 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.661 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:34.920 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.179 [2024-11-27 06:13:40.145048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.179 TLSTESTn1 00:18:35.179 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:35.746 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:35.747 "subsystems": [ 00:18:35.747 { 00:18:35.747 "subsystem": "keyring", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "keyring_file_add_key", 00:18:35.747 "params": { 00:18:35.747 "name": "key0", 00:18:35.747 "path": "/tmp/tmp.JzkmDIITwb" 00:18:35.747 } 00:18:35.747 } 00:18:35.747 ] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "iobuf", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "iobuf_set_options", 00:18:35.747 "params": { 00:18:35.747 "small_pool_count": 8192, 00:18:35.747 "large_pool_count": 1024, 00:18:35.747 "small_bufsize": 8192, 00:18:35.747 "large_bufsize": 135168, 00:18:35.747 "enable_numa": false 00:18:35.747 } 00:18:35.747 } 00:18:35.747 ] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "sock", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "sock_set_default_impl", 00:18:35.747 "params": { 
00:18:35.747 "impl_name": "uring" 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "sock_impl_set_options", 00:18:35.747 "params": { 00:18:35.747 "impl_name": "ssl", 00:18:35.747 "recv_buf_size": 4096, 00:18:35.747 "send_buf_size": 4096, 00:18:35.747 "enable_recv_pipe": true, 00:18:35.747 "enable_quickack": false, 00:18:35.747 "enable_placement_id": 0, 00:18:35.747 "enable_zerocopy_send_server": true, 00:18:35.747 "enable_zerocopy_send_client": false, 00:18:35.747 "zerocopy_threshold": 0, 00:18:35.747 "tls_version": 0, 00:18:35.747 "enable_ktls": false 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "sock_impl_set_options", 00:18:35.747 "params": { 00:18:35.747 "impl_name": "posix", 00:18:35.747 "recv_buf_size": 2097152, 00:18:35.747 "send_buf_size": 2097152, 00:18:35.747 "enable_recv_pipe": true, 00:18:35.747 "enable_quickack": false, 00:18:35.747 "enable_placement_id": 0, 00:18:35.747 "enable_zerocopy_send_server": true, 00:18:35.747 "enable_zerocopy_send_client": false, 00:18:35.747 "zerocopy_threshold": 0, 00:18:35.747 "tls_version": 0, 00:18:35.747 "enable_ktls": false 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "sock_impl_set_options", 00:18:35.747 "params": { 00:18:35.747 "impl_name": "uring", 00:18:35.747 "recv_buf_size": 2097152, 00:18:35.747 "send_buf_size": 2097152, 00:18:35.747 "enable_recv_pipe": true, 00:18:35.747 "enable_quickack": false, 00:18:35.747 "enable_placement_id": 0, 00:18:35.747 "enable_zerocopy_send_server": false, 00:18:35.747 "enable_zerocopy_send_client": false, 00:18:35.747 "zerocopy_threshold": 0, 00:18:35.747 "tls_version": 0, 00:18:35.747 "enable_ktls": false 00:18:35.747 } 00:18:35.747 } 00:18:35.747 ] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "vmd", 00:18:35.747 "config": [] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "accel", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "accel_set_options", 00:18:35.747 "params": { 00:18:35.747 "small_cache_size": 128, 00:18:35.747 "large_cache_size": 16, 00:18:35.747 "task_count": 2048, 00:18:35.747 "sequence_count": 2048, 00:18:35.747 "buf_count": 2048 00:18:35.747 } 00:18:35.747 } 00:18:35.747 ] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "bdev", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "bdev_set_options", 00:18:35.747 "params": { 00:18:35.747 "bdev_io_pool_size": 65535, 00:18:35.747 "bdev_io_cache_size": 256, 00:18:35.747 "bdev_auto_examine": true, 00:18:35.747 "iobuf_small_cache_size": 128, 00:18:35.747 "iobuf_large_cache_size": 16 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "bdev_raid_set_options", 00:18:35.747 "params": { 00:18:35.747 "process_window_size_kb": 1024, 00:18:35.747 "process_max_bandwidth_mb_sec": 0 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "bdev_iscsi_set_options", 00:18:35.747 "params": { 00:18:35.747 "timeout_sec": 30 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "bdev_nvme_set_options", 00:18:35.747 "params": { 00:18:35.747 "action_on_timeout": "none", 00:18:35.747 "timeout_us": 0, 00:18:35.747 "timeout_admin_us": 0, 00:18:35.747 "keep_alive_timeout_ms": 10000, 00:18:35.747 "arbitration_burst": 0, 00:18:35.747 "low_priority_weight": 0, 00:18:35.747 "medium_priority_weight": 0, 00:18:35.747 "high_priority_weight": 0, 00:18:35.747 "nvme_adminq_poll_period_us": 10000, 00:18:35.747 "nvme_ioq_poll_period_us": 0, 00:18:35.747 "io_queue_requests": 0, 00:18:35.747 "delay_cmd_submit": 
true, 00:18:35.747 "transport_retry_count": 4, 00:18:35.747 "bdev_retry_count": 3, 00:18:35.747 "transport_ack_timeout": 0, 00:18:35.747 "ctrlr_loss_timeout_sec": 0, 00:18:35.747 "reconnect_delay_sec": 0, 00:18:35.747 "fast_io_fail_timeout_sec": 0, 00:18:35.747 "disable_auto_failback": false, 00:18:35.747 "generate_uuids": false, 00:18:35.747 "transport_tos": 0, 00:18:35.747 "nvme_error_stat": false, 00:18:35.747 "rdma_srq_size": 0, 00:18:35.747 "io_path_stat": false, 00:18:35.747 "allow_accel_sequence": false, 00:18:35.747 "rdma_max_cq_size": 0, 00:18:35.747 "rdma_cm_event_timeout_ms": 0, 00:18:35.747 "dhchap_digests": [ 00:18:35.747 "sha256", 00:18:35.747 "sha384", 00:18:35.747 "sha512" 00:18:35.747 ], 00:18:35.747 "dhchap_dhgroups": [ 00:18:35.747 "null", 00:18:35.747 "ffdhe2048", 00:18:35.747 "ffdhe3072", 00:18:35.747 "ffdhe4096", 00:18:35.747 "ffdhe6144", 00:18:35.747 "ffdhe8192" 00:18:35.747 ] 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "bdev_nvme_set_hotplug", 00:18:35.747 "params": { 00:18:35.747 "period_us": 100000, 00:18:35.747 "enable": false 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "bdev_malloc_create", 00:18:35.747 "params": { 00:18:35.747 "name": "malloc0", 00:18:35.747 "num_blocks": 8192, 00:18:35.747 "block_size": 4096, 00:18:35.747 "physical_block_size": 4096, 00:18:35.747 "uuid": "e10ce317-cc6e-4cb8-af83-56cddee95a15", 00:18:35.747 "optimal_io_boundary": 0, 00:18:35.747 "md_size": 0, 00:18:35.747 "dif_type": 0, 00:18:35.747 "dif_is_head_of_md": false, 00:18:35.747 "dif_pi_format": 0 00:18:35.747 } 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "method": "bdev_wait_for_examine" 00:18:35.747 } 00:18:35.747 ] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "nbd", 00:18:35.747 "config": [] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "scheduler", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "framework_set_scheduler", 00:18:35.747 "params": { 00:18:35.747 "name": "static" 00:18:35.747 } 00:18:35.747 } 00:18:35.747 ] 00:18:35.747 }, 00:18:35.747 { 00:18:35.747 "subsystem": "nvmf", 00:18:35.747 "config": [ 00:18:35.747 { 00:18:35.747 "method": "nvmf_set_config", 00:18:35.747 "params": { 00:18:35.747 "discovery_filter": "match_any", 00:18:35.747 "admin_cmd_passthru": { 00:18:35.747 "identify_ctrlr": false 00:18:35.747 }, 00:18:35.747 "dhchap_digests": [ 00:18:35.747 "sha256", 00:18:35.747 "sha384", 00:18:35.747 "sha512" 00:18:35.747 ], 00:18:35.747 "dhchap_dhgroups": [ 00:18:35.747 "null", 00:18:35.747 "ffdhe2048", 00:18:35.747 "ffdhe3072", 00:18:35.747 "ffdhe4096", 00:18:35.747 "ffdhe6144", 00:18:35.748 "ffdhe8192" 00:18:35.748 ] 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_set_max_subsystems", 00:18:35.748 "params": { 00:18:35.748 "max_subsystems": 1024 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_set_crdt", 00:18:35.748 "params": { 00:18:35.748 "crdt1": 0, 00:18:35.748 "crdt2": 0, 00:18:35.748 "crdt3": 0 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_create_transport", 00:18:35.748 "params": { 00:18:35.748 "trtype": "TCP", 00:18:35.748 "max_queue_depth": 128, 00:18:35.748 "max_io_qpairs_per_ctrlr": 127, 00:18:35.748 "in_capsule_data_size": 4096, 00:18:35.748 "max_io_size": 131072, 00:18:35.748 "io_unit_size": 131072, 00:18:35.748 "max_aq_depth": 128, 00:18:35.748 "num_shared_buffers": 511, 00:18:35.748 "buf_cache_size": 4294967295, 00:18:35.748 "dif_insert_or_strip": false, 00:18:35.748 "zcopy": false, 
00:18:35.748 "c2h_success": false, 00:18:35.748 "sock_priority": 0, 00:18:35.748 "abort_timeout_sec": 1, 00:18:35.748 "ack_timeout": 0, 00:18:35.748 "data_wr_pool_size": 0 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_create_subsystem", 00:18:35.748 "params": { 00:18:35.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.748 "allow_any_host": false, 00:18:35.748 "serial_number": "SPDK00000000000001", 00:18:35.748 "model_number": "SPDK bdev Controller", 00:18:35.748 "max_namespaces": 10, 00:18:35.748 "min_cntlid": 1, 00:18:35.748 "max_cntlid": 65519, 00:18:35.748 "ana_reporting": false 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_subsystem_add_host", 00:18:35.748 "params": { 00:18:35.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.748 "host": "nqn.2016-06.io.spdk:host1", 00:18:35.748 "psk": "key0" 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_subsystem_add_ns", 00:18:35.748 "params": { 00:18:35.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.748 "namespace": { 00:18:35.748 "nsid": 1, 00:18:35.748 "bdev_name": "malloc0", 00:18:35.748 "nguid": "E10CE317CC6E4CB8AF8356CDDEE95A15", 00:18:35.748 "uuid": "e10ce317-cc6e-4cb8-af83-56cddee95a15", 00:18:35.748 "no_auto_visible": false 00:18:35.748 } 00:18:35.748 } 00:18:35.748 }, 00:18:35.748 { 00:18:35.748 "method": "nvmf_subsystem_add_listener", 00:18:35.748 "params": { 00:18:35.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.748 "listen_address": { 00:18:35.748 "trtype": "TCP", 00:18:35.748 "adrfam": "IPv4", 00:18:35.748 "traddr": "10.0.0.3", 00:18:35.748 "trsvcid": "4420" 00:18:35.748 }, 00:18:35.748 "secure_channel": true 00:18:35.748 } 00:18:35.748 } 00:18:35.748 ] 00:18:35.748 } 00:18:35.748 ] 00:18:35.748 }' 00:18:35.748 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:36.007 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:36.007 "subsystems": [ 00:18:36.007 { 00:18:36.007 "subsystem": "keyring", 00:18:36.007 "config": [ 00:18:36.007 { 00:18:36.007 "method": "keyring_file_add_key", 00:18:36.007 "params": { 00:18:36.007 "name": "key0", 00:18:36.007 "path": "/tmp/tmp.JzkmDIITwb" 00:18:36.007 } 00:18:36.007 } 00:18:36.007 ] 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "subsystem": "iobuf", 00:18:36.007 "config": [ 00:18:36.007 { 00:18:36.007 "method": "iobuf_set_options", 00:18:36.007 "params": { 00:18:36.007 "small_pool_count": 8192, 00:18:36.007 "large_pool_count": 1024, 00:18:36.007 "small_bufsize": 8192, 00:18:36.007 "large_bufsize": 135168, 00:18:36.007 "enable_numa": false 00:18:36.007 } 00:18:36.007 } 00:18:36.007 ] 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "subsystem": "sock", 00:18:36.007 "config": [ 00:18:36.007 { 00:18:36.007 "method": "sock_set_default_impl", 00:18:36.007 "params": { 00:18:36.007 "impl_name": "uring" 00:18:36.007 } 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "method": "sock_impl_set_options", 00:18:36.007 "params": { 00:18:36.007 "impl_name": "ssl", 00:18:36.007 "recv_buf_size": 4096, 00:18:36.007 "send_buf_size": 4096, 00:18:36.007 "enable_recv_pipe": true, 00:18:36.007 "enable_quickack": false, 00:18:36.007 "enable_placement_id": 0, 00:18:36.007 "enable_zerocopy_send_server": true, 00:18:36.007 "enable_zerocopy_send_client": false, 00:18:36.007 "zerocopy_threshold": 0, 00:18:36.007 "tls_version": 0, 00:18:36.007 "enable_ktls": false 00:18:36.007 } 00:18:36.007 }, 
00:18:36.007 { 00:18:36.007 "method": "sock_impl_set_options", 00:18:36.007 "params": { 00:18:36.007 "impl_name": "posix", 00:18:36.007 "recv_buf_size": 2097152, 00:18:36.007 "send_buf_size": 2097152, 00:18:36.007 "enable_recv_pipe": true, 00:18:36.007 "enable_quickack": false, 00:18:36.007 "enable_placement_id": 0, 00:18:36.007 "enable_zerocopy_send_server": true, 00:18:36.007 "enable_zerocopy_send_client": false, 00:18:36.007 "zerocopy_threshold": 0, 00:18:36.007 "tls_version": 0, 00:18:36.007 "enable_ktls": false 00:18:36.007 } 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "method": "sock_impl_set_options", 00:18:36.007 "params": { 00:18:36.007 "impl_name": "uring", 00:18:36.007 "recv_buf_size": 2097152, 00:18:36.007 "send_buf_size": 2097152, 00:18:36.007 "enable_recv_pipe": true, 00:18:36.007 "enable_quickack": false, 00:18:36.007 "enable_placement_id": 0, 00:18:36.007 "enable_zerocopy_send_server": false, 00:18:36.007 "enable_zerocopy_send_client": false, 00:18:36.007 "zerocopy_threshold": 0, 00:18:36.007 "tls_version": 0, 00:18:36.007 "enable_ktls": false 00:18:36.007 } 00:18:36.007 } 00:18:36.007 ] 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "subsystem": "vmd", 00:18:36.007 "config": [] 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "subsystem": "accel", 00:18:36.007 "config": [ 00:18:36.007 { 00:18:36.007 "method": "accel_set_options", 00:18:36.007 "params": { 00:18:36.007 "small_cache_size": 128, 00:18:36.007 "large_cache_size": 16, 00:18:36.007 "task_count": 2048, 00:18:36.007 "sequence_count": 2048, 00:18:36.007 "buf_count": 2048 00:18:36.007 } 00:18:36.007 } 00:18:36.007 ] 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "subsystem": "bdev", 00:18:36.007 "config": [ 00:18:36.007 { 00:18:36.007 "method": "bdev_set_options", 00:18:36.007 "params": { 00:18:36.007 "bdev_io_pool_size": 65535, 00:18:36.007 "bdev_io_cache_size": 256, 00:18:36.007 "bdev_auto_examine": true, 00:18:36.007 "iobuf_small_cache_size": 128, 00:18:36.007 "iobuf_large_cache_size": 16 00:18:36.007 } 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "method": "bdev_raid_set_options", 00:18:36.007 "params": { 00:18:36.007 "process_window_size_kb": 1024, 00:18:36.007 "process_max_bandwidth_mb_sec": 0 00:18:36.007 } 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "method": "bdev_iscsi_set_options", 00:18:36.007 "params": { 00:18:36.007 "timeout_sec": 30 00:18:36.007 } 00:18:36.007 }, 00:18:36.007 { 00:18:36.007 "method": "bdev_nvme_set_options", 00:18:36.007 "params": { 00:18:36.007 "action_on_timeout": "none", 00:18:36.007 "timeout_us": 0, 00:18:36.007 "timeout_admin_us": 0, 00:18:36.007 "keep_alive_timeout_ms": 10000, 00:18:36.007 "arbitration_burst": 0, 00:18:36.007 "low_priority_weight": 0, 00:18:36.007 "medium_priority_weight": 0, 00:18:36.007 "high_priority_weight": 0, 00:18:36.007 "nvme_adminq_poll_period_us": 10000, 00:18:36.007 "nvme_ioq_poll_period_us": 0, 00:18:36.007 "io_queue_requests": 512, 00:18:36.007 "delay_cmd_submit": true, 00:18:36.007 "transport_retry_count": 4, 00:18:36.007 "bdev_retry_count": 3, 00:18:36.007 "transport_ack_timeout": 0, 00:18:36.007 "ctrlr_loss_timeout_sec": 0, 00:18:36.007 "reconnect_delay_sec": 0, 00:18:36.007 "fast_io_fail_timeout_sec": 0, 00:18:36.007 "disable_auto_failback": false, 00:18:36.007 "generate_uuids": false, 00:18:36.007 "transport_tos": 0, 00:18:36.007 "nvme_error_stat": false, 00:18:36.007 "rdma_srq_size": 0, 00:18:36.007 "io_path_stat": false, 00:18:36.007 "allow_accel_sequence": false, 00:18:36.007 "rdma_max_cq_size": 0, 00:18:36.007 "rdma_cm_event_timeout_ms": 0, 00:18:36.007 
"dhchap_digests": [ 00:18:36.007 "sha256", 00:18:36.007 "sha384", 00:18:36.007 "sha512" 00:18:36.007 ], 00:18:36.007 "dhchap_dhgroups": [ 00:18:36.007 "null", 00:18:36.007 "ffdhe2048", 00:18:36.008 "ffdhe3072", 00:18:36.008 "ffdhe4096", 00:18:36.008 "ffdhe6144", 00:18:36.008 "ffdhe8192" 00:18:36.008 ] 00:18:36.008 } 00:18:36.008 }, 00:18:36.008 { 00:18:36.008 "method": "bdev_nvme_attach_controller", 00:18:36.008 "params": { 00:18:36.008 "name": "TLSTEST", 00:18:36.008 "trtype": "TCP", 00:18:36.008 "adrfam": "IPv4", 00:18:36.008 "traddr": "10.0.0.3", 00:18:36.008 "trsvcid": "4420", 00:18:36.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.008 "prchk_reftag": false, 00:18:36.008 "prchk_guard": false, 00:18:36.008 "ctrlr_loss_timeout_sec": 0, 00:18:36.008 "reconnect_delay_sec": 0, 00:18:36.008 "fast_io_fail_timeout_sec": 0, 00:18:36.008 "psk": "key0", 00:18:36.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.008 "hdgst": false, 00:18:36.008 "ddgst": false, 00:18:36.008 "multipath": "multipath" 00:18:36.008 } 00:18:36.008 }, 00:18:36.008 { 00:18:36.008 "method": "bdev_nvme_set_hotplug", 00:18:36.008 "params": { 00:18:36.008 "period_us": 100000, 00:18:36.008 "enable": false 00:18:36.008 } 00:18:36.008 }, 00:18:36.008 { 00:18:36.008 "method": "bdev_wait_for_examine" 00:18:36.008 } 00:18:36.008 ] 00:18:36.008 }, 00:18:36.008 { 00:18:36.008 "subsystem": "nbd", 00:18:36.008 "config": [] 00:18:36.008 } 00:18:36.008 ] 00:18:36.008 }' 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72363 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72363 ']' 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72363 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72363 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:36.008 killing process with pid 72363 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72363' 00:18:36.008 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.008 00:18:36.008 Latency(us) 00:18:36.008 [2024-11-27T06:13:41.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.008 [2024-11-27T06:13:41.105Z] =================================================================================================================== 00:18:36.008 [2024-11-27T06:13:41.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72363 00:18:36.008 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72363 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72315 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72315 ']' 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72315 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72315 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.267 killing process with pid 72315 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72315' 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72315 00:18:36.267 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72315 00:18:36.526 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:36.526 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:36.526 "subsystems": [ 00:18:36.526 { 00:18:36.526 "subsystem": "keyring", 00:18:36.526 "config": [ 00:18:36.526 { 00:18:36.527 "method": "keyring_file_add_key", 00:18:36.527 "params": { 00:18:36.527 "name": "key0", 00:18:36.527 "path": "/tmp/tmp.JzkmDIITwb" 00:18:36.527 } 00:18:36.527 } 00:18:36.527 ] 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "subsystem": "iobuf", 00:18:36.527 "config": [ 00:18:36.527 { 00:18:36.527 "method": "iobuf_set_options", 00:18:36.527 "params": { 00:18:36.527 "small_pool_count": 8192, 00:18:36.527 "large_pool_count": 1024, 00:18:36.527 "small_bufsize": 8192, 00:18:36.527 "large_bufsize": 135168, 00:18:36.527 "enable_numa": false 00:18:36.527 } 00:18:36.527 } 00:18:36.527 ] 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "subsystem": "sock", 00:18:36.527 "config": [ 00:18:36.527 { 00:18:36.527 "method": "sock_set_default_impl", 00:18:36.527 "params": { 00:18:36.527 "impl_name": "uring" 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "sock_impl_set_options", 00:18:36.527 "params": { 00:18:36.527 "impl_name": "ssl", 00:18:36.527 "recv_buf_size": 4096, 00:18:36.527 "send_buf_size": 4096, 00:18:36.527 "enable_recv_pipe": true, 00:18:36.527 "enable_quickack": false, 00:18:36.527 "enable_placement_id": 0, 00:18:36.527 "enable_zerocopy_send_server": true, 00:18:36.527 "enable_zerocopy_send_client": false, 00:18:36.527 "zerocopy_threshold": 0, 00:18:36.527 "tls_version": 0, 00:18:36.527 "enable_ktls": false 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "sock_impl_set_options", 00:18:36.527 "params": { 00:18:36.527 "impl_name": "posix", 00:18:36.527 "recv_buf_size": 2097152, 00:18:36.527 "send_buf_size": 2097152, 00:18:36.527 "enable_recv_pipe": true, 00:18:36.527 "enable_quickack": false, 00:18:36.527 "enable_placement_id": 0, 00:18:36.527 "enable_zerocopy_send_server": true, 00:18:36.527 "enable_zerocopy_send_client": false, 00:18:36.527 "zerocopy_threshold": 0, 00:18:36.527 "tls_version": 0, 00:18:36.527 "enable_ktls": false 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "sock_impl_set_options", 00:18:36.527 "params": { 00:18:36.527 "impl_name": "uring", 00:18:36.527 "recv_buf_size": 2097152, 00:18:36.527 "send_buf_size": 2097152, 00:18:36.527 "enable_recv_pipe": true, 00:18:36.527 "enable_quickack": false, 00:18:36.527 
"enable_placement_id": 0, 00:18:36.527 "enable_zerocopy_send_server": false, 00:18:36.527 "enable_zerocopy_send_client": false, 00:18:36.527 "zerocopy_threshold": 0, 00:18:36.527 "tls_version": 0, 00:18:36.527 "enable_ktls": false 00:18:36.527 } 00:18:36.527 } 00:18:36.527 ] 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "subsystem": "vmd", 00:18:36.527 "config": [] 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "subsystem": "accel", 00:18:36.527 "config": [ 00:18:36.527 { 00:18:36.527 "method": "accel_set_options", 00:18:36.527 "params": { 00:18:36.527 "small_cache_size": 128, 00:18:36.527 "large_cache_size": 16, 00:18:36.527 "task_count": 2048, 00:18:36.527 "sequence_count": 2048, 00:18:36.527 "buf_count": 2048 00:18:36.527 } 00:18:36.527 } 00:18:36.527 ] 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "subsystem": "bdev", 00:18:36.527 "config": [ 00:18:36.527 { 00:18:36.527 "method": "bdev_set_options", 00:18:36.527 "params": { 00:18:36.527 "bdev_io_pool_size": 65535, 00:18:36.527 "bdev_io_cache_size": 256, 00:18:36.527 "bdev_auto_examine": true, 00:18:36.527 "iobuf_small_cache_size": 128, 00:18:36.527 "iobuf_large_cache_size": 16 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "bdev_raid_set_options", 00:18:36.527 "params": { 00:18:36.527 "process_window_size_kb": 1024, 00:18:36.527 "process_max_bandwidth_mb_sec": 0 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "bdev_iscsi_set_options", 00:18:36.527 "params": { 00:18:36.527 "timeout_sec": 30 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "bdev_nvme_set_options", 00:18:36.527 "params": { 00:18:36.527 "action_on_timeout": "none", 00:18:36.527 "timeout_us": 0, 00:18:36.527 "timeout_admin_us": 0, 00:18:36.527 "keep_alive_timeout_ms": 10000, 00:18:36.527 "arbitration_burst": 0, 00:18:36.527 "low_priority_weight": 0, 00:18:36.527 "medium_priority_weight": 0, 00:18:36.527 "high_priority_weight": 0, 00:18:36.527 "nvme_adminq_poll_period_us": 10000, 00:18:36.527 "nvme_ioq_poll_period_us": 0, 00:18:36.527 "io_queue_requests": 0, 00:18:36.527 "delay_cmd_submit": true, 00:18:36.527 "transport_retry_count": 4, 00:18:36.527 "bdev_retry_count": 3, 00:18:36.527 "transport_ack_timeout": 0, 00:18:36.527 "ctrlr_loss_timeout_sec": 0, 00:18:36.527 "reconnect_delay_sec": 0, 00:18:36.527 "fast_io_fail_timeout_sec": 0, 00:18:36.527 "disable_auto_failback": false, 00:18:36.527 "generate_uuids": false, 00:18:36.527 "transport_tos": 0, 00:18:36.527 "nvme_error_stat": false, 00:18:36.527 "rdma_srq_size": 0, 00:18:36.527 "io_path_stat": false, 00:18:36.527 "allow_accel_sequence": false, 00:18:36.527 "rdma_max_cq_size": 0, 00:18:36.527 "rdma_cm_event_timeout_ms": 0, 00:18:36.527 "dhchap_digests": [ 00:18:36.527 "sha256", 00:18:36.527 "sha384", 00:18:36.527 "sha512" 00:18:36.527 ], 00:18:36.527 "dhchap_dhgroups": [ 00:18:36.527 "null", 00:18:36.527 "ffdhe2048", 00:18:36.527 "ffdhe3072", 00:18:36.527 "ffdhe4096", 00:18:36.527 "ffdhe6144", 00:18:36.527 "ffdhe8192" 00:18:36.527 ] 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "bdev_nvme_set_hotplug", 00:18:36.527 "params": { 00:18:36.527 "period_us": 100000, 00:18:36.527 "enable": false 00:18:36.527 } 00:18:36.527 }, 00:18:36.527 { 00:18:36.527 "method": "bdev_malloc_create", 00:18:36.527 "params": { 00:18:36.528 "name": "malloc0", 00:18:36.528 "num_blocks": 8192, 00:18:36.528 "block_size": 4096, 00:18:36.528 "physical_block_size": 4096, 00:18:36.528 "uuid": "e10ce317-cc6e-4cb8-af83-56cddee95a15", 00:18:36.528 "optimal_io_boundary": 0, 
00:18:36.528 "md_size": 0, 00:18:36.528 "dif_type": 0, 00:18:36.528 "dif_is_head_of_md": false, 00:18:36.528 "dif_pi_format": 0 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "bdev_wait_for_examine" 00:18:36.528 } 00:18:36.528 ] 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "subsystem": "nbd", 00:18:36.528 "config": [] 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "subsystem": "scheduler", 00:18:36.528 "config": [ 00:18:36.528 { 00:18:36.528 "method": "framework_set_scheduler", 00:18:36.528 "params": { 00:18:36.528 "name": "static" 00:18:36.528 } 00:18:36.528 } 00:18:36.528 ] 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "subsystem": "nvmf", 00:18:36.528 "config": [ 00:18:36.528 { 00:18:36.528 "method": "nvmf_set_config", 00:18:36.528 "params": { 00:18:36.528 "discovery_filter": "match_any", 00:18:36.528 "admin_cmd_passthru": { 00:18:36.528 "identify_ctrlr": false 00:18:36.528 }, 00:18:36.528 "dhchap_digests": [ 00:18:36.528 "sha256", 00:18:36.528 "sha384", 00:18:36.528 "sha512" 00:18:36.528 ], 00:18:36.528 "dhchap_dhgroups": [ 00:18:36.528 "null", 00:18:36.528 "ffdhe2048", 00:18:36.528 "ffdhe3072", 00:18:36.528 "ffdhe4096", 00:18:36.528 "ffdhe6144", 00:18:36.528 "ffdhe8192" 00:18:36.528 ] 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_set_max_subsystems", 00:18:36.528 "params": { 00:18:36.528 "max_subsystems": 1024 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_set_crdt", 00:18:36.528 "params": { 00:18:36.528 "crdt1": 0, 00:18:36.528 "crdt2": 0, 00:18:36.528 "crdt3": 0 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_create_transport", 00:18:36.528 "params": { 00:18:36.528 "trtype": "TCP", 00:18:36.528 "max_queue_depth": 128, 00:18:36.528 "max_io_qpairs_per_ctrlr": 127, 00:18:36.528 "in_capsule_data_size": 4096, 00:18:36.528 "max_io_size": 131072, 00:18:36.528 "io_unit_size": 131072, 00:18:36.528 "max_aq_depth": 128, 00:18:36.528 "num_shared_buffers": 511, 00:18:36.528 "buf_cache_size": 4294967295, 00:18:36.528 "dif_insert_or_strip": false, 00:18:36.528 "zcopy": false, 00:18:36.528 "c2h_success": false, 00:18:36.528 "sock_priority": 0, 00:18:36.528 "abort_timeout_sec": 1, 00:18:36.528 "ack_timeout": 0, 00:18:36.528 "data_wr_pool_size": 0 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_create_subsystem", 00:18:36.528 "params": { 00:18:36.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.528 "allow_any_host": false, 00:18:36.528 "serial_number": "SPDK00000000000001", 00:18:36.528 "model_number": "SPDK bdev Controller", 00:18:36.528 "max_namespaces": 10, 00:18:36.528 "min_cntlid": 1, 00:18:36.528 "max_cntlid": 65519, 00:18:36.528 "ana_reporting": false 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_subsystem_add_host", 00:18:36.528 "params": { 00:18:36.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.528 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.528 "psk": "key0" 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_subsystem_add_ns", 00:18:36.528 "params": { 00:18:36.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.528 "namespace": { 00:18:36.528 "nsid": 1, 00:18:36.528 "bdev_name": "malloc0", 00:18:36.528 "nguid": "E10CE317CC6E4CB8AF8356CDDEE95A15", 00:18:36.528 "uuid": "e10ce317-cc6e-4cb8-af83-56cddee95a15", 00:18:36.528 "no_auto_visible": false 00:18:36.528 } 00:18:36.528 } 00:18:36.528 }, 00:18:36.528 { 00:18:36.528 "method": "nvmf_subsystem_add_listener", 00:18:36.528 "params": { 00:18:36.528 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:36.528 "listen_address": { 00:18:36.528 "trtype": "TCP", 00:18:36.528 "adrfam": "IPv4", 00:18:36.528 "traddr": "10.0.0.3", 00:18:36.528 "trsvcid": "4420" 00:18:36.528 }, 00:18:36.528 "secure_channel": true 00:18:36.528 } 00:18:36.528 } 00:18:36.528 ] 00:18:36.528 } 00:18:36.528 ] 00:18:36.528 }' 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72413 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72413 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72413 ']' 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.528 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.528 [2024-11-27 06:13:41.548716] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:36.528 [2024-11-27 06:13:41.548829] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.787 [2024-11-27 06:13:41.687079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.787 [2024-11-27 06:13:41.739176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.787 [2024-11-27 06:13:41.739241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.787 [2024-11-27 06:13:41.739268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.787 [2024-11-27 06:13:41.739276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.787 [2024-11-27 06:13:41.739282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.787 [2024-11-27 06:13:41.739691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.046 [2024-11-27 06:13:41.906542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.046 [2024-11-27 06:13:41.985403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.046 [2024-11-27 06:13:42.017348] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.046 [2024-11-27 06:13:42.017567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72445 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72445 /var/tmp/bdevperf.sock 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72445 ']' 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
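Note: the initiator half of this test is a bdevperf instance configured the same way: started with -z so it idles until told to run, with its own JSON arriving on /dev/fd/63 (echoed by tls.sh@206 below), carrying the key0 keyring entry and a bdev_nvme_attach_controller call that sets "psk": "key0". A condensed sketch of the launch, where $bperf_config is a placeholder for that JSON document:

# launch the initiator in "wait for RPC" mode against its own Unix socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bperf_config") &   # $bperf_config: placeholder, see the echoed JSON below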
00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:37.614 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:37.614 "subsystems": [ 00:18:37.614 { 00:18:37.614 "subsystem": "keyring", 00:18:37.614 "config": [ 00:18:37.614 { 00:18:37.614 "method": "keyring_file_add_key", 00:18:37.614 "params": { 00:18:37.614 "name": "key0", 00:18:37.614 "path": "/tmp/tmp.JzkmDIITwb" 00:18:37.614 } 00:18:37.614 } 00:18:37.614 ] 00:18:37.614 }, 00:18:37.614 { 00:18:37.614 "subsystem": "iobuf", 00:18:37.614 "config": [ 00:18:37.614 { 00:18:37.614 "method": "iobuf_set_options", 00:18:37.614 "params": { 00:18:37.614 "small_pool_count": 8192, 00:18:37.614 "large_pool_count": 1024, 00:18:37.614 "small_bufsize": 8192, 00:18:37.614 "large_bufsize": 135168, 00:18:37.614 "enable_numa": false 00:18:37.614 } 00:18:37.614 } 00:18:37.614 ] 00:18:37.614 }, 00:18:37.614 { 00:18:37.614 "subsystem": "sock", 00:18:37.614 "config": [ 00:18:37.614 { 00:18:37.614 "method": "sock_set_default_impl", 00:18:37.614 "params": { 00:18:37.614 "impl_name": "uring" 00:18:37.614 } 00:18:37.614 }, 00:18:37.615 { 00:18:37.615 "method": "sock_impl_set_options", 00:18:37.615 "params": { 00:18:37.615 "impl_name": "ssl", 00:18:37.615 "recv_buf_size": 4096, 00:18:37.615 "send_buf_size": 4096, 00:18:37.615 "enable_recv_pipe": true, 00:18:37.615 "enable_quickack": false, 00:18:37.615 "enable_placement_id": 0, 00:18:37.615 "enable_zerocopy_send_server": true, 00:18:37.615 "enable_zerocopy_send_client": false, 00:18:37.615 "zerocopy_threshold": 0, 00:18:37.615 "tls_version": 0, 00:18:37.615 "enable_ktls": false 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "sock_impl_set_options", 00:18:37.615 "params": { 00:18:37.615 "impl_name": "posix", 00:18:37.615 "recv_buf_size": 2097152, 00:18:37.615 "send_buf_size": 2097152, 00:18:37.615 "enable_recv_pipe": true, 00:18:37.615 "enable_quickack": false, 00:18:37.615 "enable_placement_id": 0, 00:18:37.615 "enable_zerocopy_send_server": true, 00:18:37.615 "enable_zerocopy_send_client": false, 00:18:37.615 "zerocopy_threshold": 0, 00:18:37.615 "tls_version": 0, 00:18:37.615 "enable_ktls": false 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "sock_impl_set_options", 00:18:37.615 "params": { 00:18:37.615 "impl_name": "uring", 00:18:37.615 "recv_buf_size": 2097152, 00:18:37.615 "send_buf_size": 2097152, 00:18:37.615 "enable_recv_pipe": true, 00:18:37.615 "enable_quickack": false, 00:18:37.615 "enable_placement_id": 0, 00:18:37.615 "enable_zerocopy_send_server": false, 00:18:37.615 "enable_zerocopy_send_client": false, 00:18:37.615 "zerocopy_threshold": 0, 00:18:37.615 "tls_version": 0, 00:18:37.615 "enable_ktls": false 00:18:37.615 } 00:18:37.615 } 00:18:37.615 ] 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "subsystem": "vmd", 00:18:37.615 "config": [] 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "subsystem": "accel", 00:18:37.615 "config": [ 00:18:37.615 { 00:18:37.615 "method": "accel_set_options", 00:18:37.615 "params": { 00:18:37.615 "small_cache_size": 128, 00:18:37.615 "large_cache_size": 16, 00:18:37.615 "task_count": 2048, 00:18:37.615 "sequence_count": 
2048, 00:18:37.615 "buf_count": 2048 00:18:37.615 } 00:18:37.615 } 00:18:37.615 ] 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "subsystem": "bdev", 00:18:37.615 "config": [ 00:18:37.615 { 00:18:37.615 "method": "bdev_set_options", 00:18:37.615 "params": { 00:18:37.615 "bdev_io_pool_size": 65535, 00:18:37.615 "bdev_io_cache_size": 256, 00:18:37.615 "bdev_auto_examine": true, 00:18:37.615 "iobuf_small_cache_size": 128, 00:18:37.615 "iobuf_large_cache_size": 16 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "bdev_raid_set_options", 00:18:37.615 "params": { 00:18:37.615 "process_window_size_kb": 1024, 00:18:37.615 "process_max_bandwidth_mb_sec": 0 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "bdev_iscsi_set_options", 00:18:37.615 "params": { 00:18:37.615 "timeout_sec": 30 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "bdev_nvme_set_options", 00:18:37.615 "params": { 00:18:37.615 "action_on_timeout": "none", 00:18:37.615 "timeout_us": 0, 00:18:37.615 "timeout_admin_us": 0, 00:18:37.615 "keep_alive_timeout_ms": 10000, 00:18:37.615 "arbitration_burst": 0, 00:18:37.615 "low_priority_weight": 0, 00:18:37.615 "medium_priority_weight": 0, 00:18:37.615 "high_priority_weight": 0, 00:18:37.615 "nvme_adminq_poll_period_us": 10000, 00:18:37.615 "nvme_ioq_poll_period_us": 0, 00:18:37.615 "io_queue_requests": 512, 00:18:37.615 "delay_cmd_submit": true, 00:18:37.615 "transport_retry_count": 4, 00:18:37.615 "bdev_retry_count": 3, 00:18:37.615 "transport_ack_timeout": 0, 00:18:37.615 "ctrlr_loss_timeout_sec": 0, 00:18:37.615 "reconnect_delay_sec": 0, 00:18:37.615 "fast_io_fail_timeout_sec": 0, 00:18:37.615 "disable_auto_failback": false, 00:18:37.615 "generate_uuids": false, 00:18:37.615 "transport_tos": 0, 00:18:37.615 "nvme_error_stat": false, 00:18:37.615 "rdma_srq_size": 0, 00:18:37.615 "io_path_stat": false, 00:18:37.615 "allow_accel_sequence": false, 00:18:37.615 "rdma_max_cq_size": 0, 00:18:37.615 "rdma_cm_event_timeout_ms": 0, 00:18:37.615 "dhchap_digests": [ 00:18:37.615 "sha256", 00:18:37.615 "sha384", 00:18:37.615 "sha512" 00:18:37.615 ], 00:18:37.615 "dhchap_dhgroups": [ 00:18:37.615 "null", 00:18:37.615 "ffdhe2048", 00:18:37.615 "ffdhe3072", 00:18:37.615 "ffdhe4096", 00:18:37.615 "ffdhe6144", 00:18:37.615 "ffdhe8192" 00:18:37.615 ] 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "bdev_nvme_attach_controller", 00:18:37.615 "params": { 00:18:37.615 "name": "TLSTEST", 00:18:37.615 "trtype": "TCP", 00:18:37.615 "adrfam": "IPv4", 00:18:37.615 "traddr": "10.0.0.3", 00:18:37.615 "trsvcid": "4420", 00:18:37.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.615 "prchk_reftag": false, 00:18:37.615 "prchk_guard": false, 00:18:37.615 "ctrlr_loss_timeout_sec": 0, 00:18:37.615 "reconnect_delay_sec": 0, 00:18:37.615 "fast_io_fail_timeout_sec": 0, 00:18:37.615 "psk": "key0", 00:18:37.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.615 "hdgst": false, 00:18:37.615 "ddgst": false, 00:18:37.615 "multipath": "multipath" 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "bdev_nvme_set_hotplug", 00:18:37.615 "params": { 00:18:37.615 "period_us": 100000, 00:18:37.615 "enable": false 00:18:37.615 } 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "method": "bdev_wait_for_examine" 00:18:37.615 } 00:18:37.615 ] 00:18:37.615 }, 00:18:37.615 { 00:18:37.615 "subsystem": "nbd", 00:18:37.615 "config": [] 00:18:37.615 } 00:18:37.615 ] 00:18:37.615 }' 00:18:37.615 [2024-11-27 06:13:42.661558] Starting SPDK v25.01-pre git 
sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:37.615 [2024-11-27 06:13:42.661670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72445 ] 00:18:37.873 [2024-11-27 06:13:42.818720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.873 [2024-11-27 06:13:42.881652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.131 [2024-11-27 06:13:43.037971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.131 [2024-11-27 06:13:43.101085] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.698 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.698 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.698 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:38.957 Running I/O for 10 seconds... 00:18:40.832 3828.00 IOPS, 14.95 MiB/s [2024-11-27T06:13:46.865Z] 3826.50 IOPS, 14.95 MiB/s [2024-11-27T06:13:48.247Z] 3796.00 IOPS, 14.83 MiB/s [2024-11-27T06:13:49.183Z] 3822.25 IOPS, 14.93 MiB/s [2024-11-27T06:13:50.119Z] 3848.40 IOPS, 15.03 MiB/s [2024-11-27T06:13:51.053Z] 3864.17 IOPS, 15.09 MiB/s [2024-11-27T06:13:51.987Z] 3870.43 IOPS, 15.12 MiB/s [2024-11-27T06:13:52.924Z] 3882.25 IOPS, 15.17 MiB/s [2024-11-27T06:13:53.860Z] 3880.78 IOPS, 15.16 MiB/s [2024-11-27T06:13:54.118Z] 3868.20 IOPS, 15.11 MiB/s 00:18:49.021 Latency(us) 00:18:49.021 [2024-11-27T06:13:54.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.021 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.021 Verification LBA range: start 0x0 length 0x2000 00:18:49.021 TLSTESTn1 : 10.02 3874.96 15.14 0.00 0.00 32981.00 4230.05 29550.78 00:18:49.021 [2024-11-27T06:13:54.118Z] =================================================================================================================== 00:18:49.021 [2024-11-27T06:13:54.118Z] Total : 3874.96 15.14 0.00 0.00 32981.00 4230.05 29550.78 00:18:49.021 { 00:18:49.021 "results": [ 00:18:49.021 { 00:18:49.021 "job": "TLSTESTn1", 00:18:49.021 "core_mask": "0x4", 00:18:49.021 "workload": "verify", 00:18:49.021 "status": "finished", 00:18:49.021 "verify_range": { 00:18:49.021 "start": 0, 00:18:49.021 "length": 8192 00:18:49.021 }, 00:18:49.021 "queue_depth": 128, 00:18:49.021 "io_size": 4096, 00:18:49.021 "runtime": 10.015321, 00:18:49.021 "iops": 3874.963168928884, 00:18:49.021 "mibps": 15.136574878628453, 00:18:49.021 "io_failed": 0, 00:18:49.021 "io_timeout": 0, 00:18:49.021 "avg_latency_us": 32980.995953047444, 00:18:49.021 "min_latency_us": 4230.050909090909, 00:18:49.021 "max_latency_us": 29550.778181818183 00:18:49.021 } 00:18:49.021 ], 00:18:49.021 "core_count": 1 00:18:49.021 } 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72445 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72445 ']' 00:18:49.021 
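Note: once bdevperf logs the TLS attach notice, the I/O itself is driven from outside the process. bdevperf.py connects to the same RPC socket and issues perform_tests, which is what produces the ten per-second IOPS samples and the TLSTESTn1 latency table above before the initiator is killed. The driver call as used by tls.sh@213:

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests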
06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72445 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72445 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:49.021 killing process with pid 72445 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72445' 00:18:49.021 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.021 00:18:49.021 Latency(us) 00:18:49.021 [2024-11-27T06:13:54.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.021 [2024-11-27T06:13:54.118Z] =================================================================================================================== 00:18:49.021 [2024-11-27T06:13:54.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72445 00:18:49.021 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72445 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72413 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72413 ']' 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72413 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72413 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.280 killing process with pid 72413 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72413' 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72413 00:18:49.280 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72413 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72583 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
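Note: killprocess, used here for the bdevperf initiator (72445) and the first target (72413), is a small autotest_common.sh helper; the trace lines above reduce to roughly the following sketch, with $pid standing for the process id:

kill -0 "$pid"                      # bail out early if the process is already gone
ps --no-headers -o comm= "$pid"     # identify the process (and refuse to kill sudo)
kill "$pid"                         # terminate it
wait "$pid"                         # reap it so the next test starts from a clean slate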
00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72583 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72583 ']' 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.539 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.539 [2024-11-27 06:13:54.533473] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:49.539 [2024-11-27 06:13:54.533577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.798 [2024-11-27 06:13:54.680217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.798 [2024-11-27 06:13:54.729815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.798 [2024-11-27 06:13:54.729875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.798 [2024-11-27 06:13:54.729902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.798 [2024-11-27 06:13:54.729909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.798 [2024-11-27 06:13:54.729916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.798 [2024-11-27 06:13:54.730363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.798 [2024-11-27 06:13:54.783449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.798 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.798 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.798 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.798 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.798 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.057 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.057 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JzkmDIITwb 00:18:50.057 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JzkmDIITwb 00:18:50.057 06:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.315 [2024-11-27 06:13:55.186208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.315 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.573 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:50.832 [2024-11-27 06:13:55.778353] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.832 [2024-11-27 06:13:55.778633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:50.832 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.090 malloc0 00:18:51.090 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.348 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:51.607 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.891 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72631 00:18:51.891 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:51.891 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.891 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72631 /var/tmp/bdevperf.sock 00:18:51.891 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72631 ']' 00:18:51.891 
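Note: unlike the first pass, the second target (pid 72583) is provisioned at runtime: setup_nvmf_tgt drives it with plain rpc.py calls, registering the PSK file and binding it to the host entry. Collapsed from the trace above, the sequence is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0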
06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.892 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.892 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.892 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.892 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.892 [2024-11-27 06:13:56.852391] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:51.892 [2024-11-27 06:13:56.852492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72631 ] 00:18:52.150 [2024-11-27 06:13:57.006165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.150 [2024-11-27 06:13:57.071945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.150 [2024-11-27 06:13:57.131495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.084 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.084 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.084 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:53.084 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.342 [2024-11-27 06:13:58.333375] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.342 nvme0n1 00:18:53.342 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.600 Running I/O for 1 seconds... 
00:18:54.548 4553.00 IOPS, 17.79 MiB/s 00:18:54.548 Latency(us) 00:18:54.548 [2024-11-27T06:13:59.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.548 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:54.548 Verification LBA range: start 0x0 length 0x2000 00:18:54.548 nvme0n1 : 1.02 4579.89 17.89 0.00 0.00 27603.27 6583.39 18111.77 00:18:54.548 [2024-11-27T06:13:59.645Z] =================================================================================================================== 00:18:54.548 [2024-11-27T06:13:59.645Z] Total : 4579.89 17.89 0.00 0.00 27603.27 6583.39 18111.77 00:18:54.548 { 00:18:54.548 "results": [ 00:18:54.548 { 00:18:54.548 "job": "nvme0n1", 00:18:54.548 "core_mask": "0x2", 00:18:54.548 "workload": "verify", 00:18:54.548 "status": "finished", 00:18:54.548 "verify_range": { 00:18:54.548 "start": 0, 00:18:54.548 "length": 8192 00:18:54.548 }, 00:18:54.548 "queue_depth": 128, 00:18:54.548 "io_size": 4096, 00:18:54.548 "runtime": 1.022077, 00:18:54.548 "iops": 4579.889773471079, 00:18:54.548 "mibps": 17.890194427621402, 00:18:54.548 "io_failed": 0, 00:18:54.548 "io_timeout": 0, 00:18:54.548 "avg_latency_us": 27603.27420325882, 00:18:54.548 "min_latency_us": 6583.389090909091, 00:18:54.548 "max_latency_us": 18111.767272727273 00:18:54.548 } 00:18:54.548 ], 00:18:54.548 "core_count": 1 00:18:54.548 } 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72631 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72631 ']' 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72631 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72631 00:18:54.548 killing process with pid 72631 00:18:54.548 Received shutdown signal, test time was about 1.000000 seconds 00:18:54.548 00:18:54.548 Latency(us) 00:18:54.548 [2024-11-27T06:13:59.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.548 [2024-11-27T06:13:59.645Z] =================================================================================================================== 00:18:54.548 [2024-11-27T06:13:59.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72631' 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72631 00:18:54.548 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72631 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72583 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72583 ']' 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72583 00:18:54.808 06:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72583 00:18:54.808 killing process with pid 72583 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72583' 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72583 00:18:54.808 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72583 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72688 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72688 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72688 ']' 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.067 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.067 [2024-11-27 06:14:00.111542] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:55.067 [2024-11-27 06:14:00.111660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.325 [2024-11-27 06:14:00.258031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.325 [2024-11-27 06:14:00.297165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.325 [2024-11-27 06:14:00.297210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:55.325 [2024-11-27 06:14:00.297220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.325 [2024-11-27 06:14:00.297228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.325 [2024-11-27 06:14:00.297234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.325 [2024-11-27 06:14:00.297564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.325 [2024-11-27 06:14:00.349095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:55.325 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.325 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.325 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.325 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.325 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.584 [2024-11-27 06:14:00.467230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.584 malloc0 00:18:55.584 [2024-11-27 06:14:00.497946] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.584 [2024-11-27 06:14:00.498126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72707 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72707 /var/tmp/bdevperf.sock 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72707 ']' 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
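Note: on the initiator side this third pass repeats the same two-step handshake over the bdevperf RPC socket: the PSK file is registered as key0, then the controller is attached with that key (the tls.sh@259 and @260 calls traced below). Condensed, assuming the same key path:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1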
00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.584 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.584 [2024-11-27 06:14:00.574447] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:55.584 [2024-11-27 06:14:00.574508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72707 ] 00:18:55.842 [2024-11-27 06:14:00.721969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.842 [2024-11-27 06:14:00.773474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.842 [2024-11-27 06:14:00.830075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:55.842 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.842 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.842 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JzkmDIITwb 00:18:56.100 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:56.358 [2024-11-27 06:14:01.402162] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.616 nvme0n1 00:18:56.616 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.616 Running I/O for 1 seconds... 
00:18:57.550 4291.00 IOPS, 16.76 MiB/s 00:18:57.550 Latency(us) 00:18:57.550 [2024-11-27T06:14:02.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.550 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.550 Verification LBA range: start 0x0 length 0x2000 00:18:57.550 nvme0n1 : 1.02 4350.16 16.99 0.00 0.00 29176.70 5332.25 23712.12 00:18:57.550 [2024-11-27T06:14:02.647Z] =================================================================================================================== 00:18:57.550 [2024-11-27T06:14:02.647Z] Total : 4350.16 16.99 0.00 0.00 29176.70 5332.25 23712.12 00:18:57.550 { 00:18:57.550 "results": [ 00:18:57.550 { 00:18:57.550 "job": "nvme0n1", 00:18:57.550 "core_mask": "0x2", 00:18:57.550 "workload": "verify", 00:18:57.550 "status": "finished", 00:18:57.550 "verify_range": { 00:18:57.550 "start": 0, 00:18:57.550 "length": 8192 00:18:57.550 }, 00:18:57.550 "queue_depth": 128, 00:18:57.550 "io_size": 4096, 00:18:57.550 "runtime": 1.015825, 00:18:57.550 "iops": 4350.158737971599, 00:18:57.550 "mibps": 16.99280757020156, 00:18:57.550 "io_failed": 0, 00:18:57.550 "io_timeout": 0, 00:18:57.550 "avg_latency_us": 29176.702296282583, 00:18:57.550 "min_latency_us": 5332.2472727272725, 00:18:57.550 "max_latency_us": 23712.116363636364 00:18:57.550 } 00:18:57.550 ], 00:18:57.550 "core_count": 1 00:18:57.550 } 00:18:57.550 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:57.550 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.550 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.809 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.809 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:57.809 "subsystems": [ 00:18:57.809 { 00:18:57.809 "subsystem": "keyring", 00:18:57.809 "config": [ 00:18:57.809 { 00:18:57.809 "method": "keyring_file_add_key", 00:18:57.809 "params": { 00:18:57.809 "name": "key0", 00:18:57.809 "path": "/tmp/tmp.JzkmDIITwb" 00:18:57.809 } 00:18:57.809 } 00:18:57.809 ] 00:18:57.809 }, 00:18:57.809 { 00:18:57.809 "subsystem": "iobuf", 00:18:57.809 "config": [ 00:18:57.809 { 00:18:57.809 "method": "iobuf_set_options", 00:18:57.809 "params": { 00:18:57.809 "small_pool_count": 8192, 00:18:57.809 "large_pool_count": 1024, 00:18:57.809 "small_bufsize": 8192, 00:18:57.809 "large_bufsize": 135168, 00:18:57.809 "enable_numa": false 00:18:57.809 } 00:18:57.809 } 00:18:57.809 ] 00:18:57.809 }, 00:18:57.809 { 00:18:57.809 "subsystem": "sock", 00:18:57.809 "config": [ 00:18:57.809 { 00:18:57.809 "method": "sock_set_default_impl", 00:18:57.809 "params": { 00:18:57.809 "impl_name": "uring" 00:18:57.809 } 00:18:57.809 }, 00:18:57.809 { 00:18:57.809 "method": "sock_impl_set_options", 00:18:57.809 "params": { 00:18:57.809 "impl_name": "ssl", 00:18:57.809 "recv_buf_size": 4096, 00:18:57.809 "send_buf_size": 4096, 00:18:57.809 "enable_recv_pipe": true, 00:18:57.809 "enable_quickack": false, 00:18:57.809 "enable_placement_id": 0, 00:18:57.809 "enable_zerocopy_send_server": true, 00:18:57.809 "enable_zerocopy_send_client": false, 00:18:57.809 "zerocopy_threshold": 0, 00:18:57.809 "tls_version": 0, 00:18:57.809 "enable_ktls": false 00:18:57.809 } 00:18:57.809 }, 00:18:57.809 { 00:18:57.809 "method": "sock_impl_set_options", 00:18:57.809 "params": { 00:18:57.809 "impl_name": 
"posix", 00:18:57.809 "recv_buf_size": 2097152, 00:18:57.809 "send_buf_size": 2097152, 00:18:57.809 "enable_recv_pipe": true, 00:18:57.809 "enable_quickack": false, 00:18:57.809 "enable_placement_id": 0, 00:18:57.809 "enable_zerocopy_send_server": true, 00:18:57.809 "enable_zerocopy_send_client": false, 00:18:57.809 "zerocopy_threshold": 0, 00:18:57.809 "tls_version": 0, 00:18:57.809 "enable_ktls": false 00:18:57.809 } 00:18:57.809 }, 00:18:57.809 { 00:18:57.809 "method": "sock_impl_set_options", 00:18:57.809 "params": { 00:18:57.809 "impl_name": "uring", 00:18:57.809 "recv_buf_size": 2097152, 00:18:57.809 "send_buf_size": 2097152, 00:18:57.809 "enable_recv_pipe": true, 00:18:57.809 "enable_quickack": false, 00:18:57.809 "enable_placement_id": 0, 00:18:57.809 "enable_zerocopy_send_server": false, 00:18:57.809 "enable_zerocopy_send_client": false, 00:18:57.809 "zerocopy_threshold": 0, 00:18:57.810 "tls_version": 0, 00:18:57.810 "enable_ktls": false 00:18:57.810 } 00:18:57.810 } 00:18:57.810 ] 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "subsystem": "vmd", 00:18:57.810 "config": [] 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "subsystem": "accel", 00:18:57.810 "config": [ 00:18:57.810 { 00:18:57.810 "method": "accel_set_options", 00:18:57.810 "params": { 00:18:57.810 "small_cache_size": 128, 00:18:57.810 "large_cache_size": 16, 00:18:57.810 "task_count": 2048, 00:18:57.810 "sequence_count": 2048, 00:18:57.810 "buf_count": 2048 00:18:57.810 } 00:18:57.810 } 00:18:57.810 ] 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "subsystem": "bdev", 00:18:57.810 "config": [ 00:18:57.810 { 00:18:57.810 "method": "bdev_set_options", 00:18:57.810 "params": { 00:18:57.810 "bdev_io_pool_size": 65535, 00:18:57.810 "bdev_io_cache_size": 256, 00:18:57.810 "bdev_auto_examine": true, 00:18:57.810 "iobuf_small_cache_size": 128, 00:18:57.810 "iobuf_large_cache_size": 16 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "bdev_raid_set_options", 00:18:57.810 "params": { 00:18:57.810 "process_window_size_kb": 1024, 00:18:57.810 "process_max_bandwidth_mb_sec": 0 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "bdev_iscsi_set_options", 00:18:57.810 "params": { 00:18:57.810 "timeout_sec": 30 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "bdev_nvme_set_options", 00:18:57.810 "params": { 00:18:57.810 "action_on_timeout": "none", 00:18:57.810 "timeout_us": 0, 00:18:57.810 "timeout_admin_us": 0, 00:18:57.810 "keep_alive_timeout_ms": 10000, 00:18:57.810 "arbitration_burst": 0, 00:18:57.810 "low_priority_weight": 0, 00:18:57.810 "medium_priority_weight": 0, 00:18:57.810 "high_priority_weight": 0, 00:18:57.810 "nvme_adminq_poll_period_us": 10000, 00:18:57.810 "nvme_ioq_poll_period_us": 0, 00:18:57.810 "io_queue_requests": 0, 00:18:57.810 "delay_cmd_submit": true, 00:18:57.810 "transport_retry_count": 4, 00:18:57.810 "bdev_retry_count": 3, 00:18:57.810 "transport_ack_timeout": 0, 00:18:57.810 "ctrlr_loss_timeout_sec": 0, 00:18:57.810 "reconnect_delay_sec": 0, 00:18:57.810 "fast_io_fail_timeout_sec": 0, 00:18:57.810 "disable_auto_failback": false, 00:18:57.810 "generate_uuids": false, 00:18:57.810 "transport_tos": 0, 00:18:57.810 "nvme_error_stat": false, 00:18:57.810 "rdma_srq_size": 0, 00:18:57.810 "io_path_stat": false, 00:18:57.810 "allow_accel_sequence": false, 00:18:57.810 "rdma_max_cq_size": 0, 00:18:57.810 "rdma_cm_event_timeout_ms": 0, 00:18:57.810 "dhchap_digests": [ 00:18:57.810 "sha256", 00:18:57.810 "sha384", 00:18:57.810 "sha512" 00:18:57.810 ], 00:18:57.810 
"dhchap_dhgroups": [ 00:18:57.810 "null", 00:18:57.810 "ffdhe2048", 00:18:57.810 "ffdhe3072", 00:18:57.810 "ffdhe4096", 00:18:57.810 "ffdhe6144", 00:18:57.810 "ffdhe8192" 00:18:57.810 ] 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "bdev_nvme_set_hotplug", 00:18:57.810 "params": { 00:18:57.810 "period_us": 100000, 00:18:57.810 "enable": false 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "bdev_malloc_create", 00:18:57.810 "params": { 00:18:57.810 "name": "malloc0", 00:18:57.810 "num_blocks": 8192, 00:18:57.810 "block_size": 4096, 00:18:57.810 "physical_block_size": 4096, 00:18:57.810 "uuid": "6e26d5ab-816e-4f04-8d0c-24c8d9a06cf3", 00:18:57.810 "optimal_io_boundary": 0, 00:18:57.810 "md_size": 0, 00:18:57.810 "dif_type": 0, 00:18:57.810 "dif_is_head_of_md": false, 00:18:57.810 "dif_pi_format": 0 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "bdev_wait_for_examine" 00:18:57.810 } 00:18:57.810 ] 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "subsystem": "nbd", 00:18:57.810 "config": [] 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "subsystem": "scheduler", 00:18:57.810 "config": [ 00:18:57.810 { 00:18:57.810 "method": "framework_set_scheduler", 00:18:57.810 "params": { 00:18:57.810 "name": "static" 00:18:57.810 } 00:18:57.810 } 00:18:57.810 ] 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "subsystem": "nvmf", 00:18:57.810 "config": [ 00:18:57.810 { 00:18:57.810 "method": "nvmf_set_config", 00:18:57.810 "params": { 00:18:57.810 "discovery_filter": "match_any", 00:18:57.810 "admin_cmd_passthru": { 00:18:57.810 "identify_ctrlr": false 00:18:57.810 }, 00:18:57.810 "dhchap_digests": [ 00:18:57.810 "sha256", 00:18:57.810 "sha384", 00:18:57.810 "sha512" 00:18:57.810 ], 00:18:57.810 "dhchap_dhgroups": [ 00:18:57.810 "null", 00:18:57.810 "ffdhe2048", 00:18:57.810 "ffdhe3072", 00:18:57.810 "ffdhe4096", 00:18:57.810 "ffdhe6144", 00:18:57.810 "ffdhe8192" 00:18:57.810 ] 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_set_max_subsystems", 00:18:57.810 "params": { 00:18:57.810 "max_subsystems": 1024 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_set_crdt", 00:18:57.810 "params": { 00:18:57.810 "crdt1": 0, 00:18:57.810 "crdt2": 0, 00:18:57.810 "crdt3": 0 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_create_transport", 00:18:57.810 "params": { 00:18:57.810 "trtype": "TCP", 00:18:57.810 "max_queue_depth": 128, 00:18:57.810 "max_io_qpairs_per_ctrlr": 127, 00:18:57.810 "in_capsule_data_size": 4096, 00:18:57.810 "max_io_size": 131072, 00:18:57.810 "io_unit_size": 131072, 00:18:57.810 "max_aq_depth": 128, 00:18:57.810 "num_shared_buffers": 511, 00:18:57.810 "buf_cache_size": 4294967295, 00:18:57.810 "dif_insert_or_strip": false, 00:18:57.810 "zcopy": false, 00:18:57.810 "c2h_success": false, 00:18:57.810 "sock_priority": 0, 00:18:57.810 "abort_timeout_sec": 1, 00:18:57.810 "ack_timeout": 0, 00:18:57.810 "data_wr_pool_size": 0 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_create_subsystem", 00:18:57.810 "params": { 00:18:57.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.810 "allow_any_host": false, 00:18:57.810 "serial_number": "00000000000000000000", 00:18:57.810 "model_number": "SPDK bdev Controller", 00:18:57.810 "max_namespaces": 32, 00:18:57.810 "min_cntlid": 1, 00:18:57.810 "max_cntlid": 65519, 00:18:57.810 "ana_reporting": false 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_subsystem_add_host", 
00:18:57.810 "params": { 00:18:57.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.810 "host": "nqn.2016-06.io.spdk:host1", 00:18:57.810 "psk": "key0" 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_subsystem_add_ns", 00:18:57.810 "params": { 00:18:57.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.810 "namespace": { 00:18:57.810 "nsid": 1, 00:18:57.810 "bdev_name": "malloc0", 00:18:57.810 "nguid": "6E26D5AB816E4F048D0C24C8D9A06CF3", 00:18:57.810 "uuid": "6e26d5ab-816e-4f04-8d0c-24c8d9a06cf3", 00:18:57.810 "no_auto_visible": false 00:18:57.810 } 00:18:57.810 } 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "method": "nvmf_subsystem_add_listener", 00:18:57.810 "params": { 00:18:57.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.810 "listen_address": { 00:18:57.810 "trtype": "TCP", 00:18:57.810 "adrfam": "IPv4", 00:18:57.810 "traddr": "10.0.0.3", 00:18:57.810 "trsvcid": "4420" 00:18:57.810 }, 00:18:57.810 "secure_channel": false, 00:18:57.810 "sock_impl": "ssl" 00:18:57.810 } 00:18:57.810 } 00:18:57.810 ] 00:18:57.810 } 00:18:57.810 ] 00:18:57.810 }' 00:18:57.810 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:58.069 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:58.069 "subsystems": [ 00:18:58.069 { 00:18:58.069 "subsystem": "keyring", 00:18:58.069 "config": [ 00:18:58.069 { 00:18:58.069 "method": "keyring_file_add_key", 00:18:58.069 "params": { 00:18:58.069 "name": "key0", 00:18:58.069 "path": "/tmp/tmp.JzkmDIITwb" 00:18:58.069 } 00:18:58.069 } 00:18:58.069 ] 00:18:58.069 }, 00:18:58.069 { 00:18:58.069 "subsystem": "iobuf", 00:18:58.069 "config": [ 00:18:58.069 { 00:18:58.069 "method": "iobuf_set_options", 00:18:58.069 "params": { 00:18:58.069 "small_pool_count": 8192, 00:18:58.069 "large_pool_count": 1024, 00:18:58.069 "small_bufsize": 8192, 00:18:58.069 "large_bufsize": 135168, 00:18:58.069 "enable_numa": false 00:18:58.069 } 00:18:58.069 } 00:18:58.069 ] 00:18:58.069 }, 00:18:58.069 { 00:18:58.069 "subsystem": "sock", 00:18:58.069 "config": [ 00:18:58.069 { 00:18:58.069 "method": "sock_set_default_impl", 00:18:58.069 "params": { 00:18:58.069 "impl_name": "uring" 00:18:58.069 } 00:18:58.069 }, 00:18:58.069 { 00:18:58.069 "method": "sock_impl_set_options", 00:18:58.069 "params": { 00:18:58.069 "impl_name": "ssl", 00:18:58.069 "recv_buf_size": 4096, 00:18:58.069 "send_buf_size": 4096, 00:18:58.069 "enable_recv_pipe": true, 00:18:58.069 "enable_quickack": false, 00:18:58.069 "enable_placement_id": 0, 00:18:58.069 "enable_zerocopy_send_server": true, 00:18:58.069 "enable_zerocopy_send_client": false, 00:18:58.070 "zerocopy_threshold": 0, 00:18:58.070 "tls_version": 0, 00:18:58.070 "enable_ktls": false 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "sock_impl_set_options", 00:18:58.070 "params": { 00:18:58.070 "impl_name": "posix", 00:18:58.070 "recv_buf_size": 2097152, 00:18:58.070 "send_buf_size": 2097152, 00:18:58.070 "enable_recv_pipe": true, 00:18:58.070 "enable_quickack": false, 00:18:58.070 "enable_placement_id": 0, 00:18:58.070 "enable_zerocopy_send_server": true, 00:18:58.070 "enable_zerocopy_send_client": false, 00:18:58.070 "zerocopy_threshold": 0, 00:18:58.070 "tls_version": 0, 00:18:58.070 "enable_ktls": false 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "sock_impl_set_options", 00:18:58.070 "params": { 00:18:58.070 "impl_name": "uring", 00:18:58.070 
"recv_buf_size": 2097152, 00:18:58.070 "send_buf_size": 2097152, 00:18:58.070 "enable_recv_pipe": true, 00:18:58.070 "enable_quickack": false, 00:18:58.070 "enable_placement_id": 0, 00:18:58.070 "enable_zerocopy_send_server": false, 00:18:58.070 "enable_zerocopy_send_client": false, 00:18:58.070 "zerocopy_threshold": 0, 00:18:58.070 "tls_version": 0, 00:18:58.070 "enable_ktls": false 00:18:58.070 } 00:18:58.070 } 00:18:58.070 ] 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "subsystem": "vmd", 00:18:58.070 "config": [] 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "subsystem": "accel", 00:18:58.070 "config": [ 00:18:58.070 { 00:18:58.070 "method": "accel_set_options", 00:18:58.070 "params": { 00:18:58.070 "small_cache_size": 128, 00:18:58.070 "large_cache_size": 16, 00:18:58.070 "task_count": 2048, 00:18:58.070 "sequence_count": 2048, 00:18:58.070 "buf_count": 2048 00:18:58.070 } 00:18:58.070 } 00:18:58.070 ] 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "subsystem": "bdev", 00:18:58.070 "config": [ 00:18:58.070 { 00:18:58.070 "method": "bdev_set_options", 00:18:58.070 "params": { 00:18:58.070 "bdev_io_pool_size": 65535, 00:18:58.070 "bdev_io_cache_size": 256, 00:18:58.070 "bdev_auto_examine": true, 00:18:58.070 "iobuf_small_cache_size": 128, 00:18:58.070 "iobuf_large_cache_size": 16 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_raid_set_options", 00:18:58.070 "params": { 00:18:58.070 "process_window_size_kb": 1024, 00:18:58.070 "process_max_bandwidth_mb_sec": 0 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_iscsi_set_options", 00:18:58.070 "params": { 00:18:58.070 "timeout_sec": 30 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_nvme_set_options", 00:18:58.070 "params": { 00:18:58.070 "action_on_timeout": "none", 00:18:58.070 "timeout_us": 0, 00:18:58.070 "timeout_admin_us": 0, 00:18:58.070 "keep_alive_timeout_ms": 10000, 00:18:58.070 "arbitration_burst": 0, 00:18:58.070 "low_priority_weight": 0, 00:18:58.070 "medium_priority_weight": 0, 00:18:58.070 "high_priority_weight": 0, 00:18:58.070 "nvme_adminq_poll_period_us": 10000, 00:18:58.070 "nvme_ioq_poll_period_us": 0, 00:18:58.070 "io_queue_requests": 512, 00:18:58.070 "delay_cmd_submit": true, 00:18:58.070 "transport_retry_count": 4, 00:18:58.070 "bdev_retry_count": 3, 00:18:58.070 "transport_ack_timeout": 0, 00:18:58.070 "ctrlr_loss_timeout_sec": 0, 00:18:58.070 "reconnect_delay_sec": 0, 00:18:58.070 "fast_io_fail_timeout_sec": 0, 00:18:58.070 "disable_auto_failback": false, 00:18:58.070 "generate_uuids": false, 00:18:58.070 "transport_tos": 0, 00:18:58.070 "nvme_error_stat": false, 00:18:58.070 "rdma_srq_size": 0, 00:18:58.070 "io_path_stat": false, 00:18:58.070 "allow_accel_sequence": false, 00:18:58.070 "rdma_max_cq_size": 0, 00:18:58.070 "rdma_cm_event_timeout_ms": 0, 00:18:58.070 "dhchap_digests": [ 00:18:58.070 "sha256", 00:18:58.070 "sha384", 00:18:58.070 "sha512" 00:18:58.070 ], 00:18:58.070 "dhchap_dhgroups": [ 00:18:58.070 "null", 00:18:58.070 "ffdhe2048", 00:18:58.070 "ffdhe3072", 00:18:58.070 "ffdhe4096", 00:18:58.070 "ffdhe6144", 00:18:58.070 "ffdhe8192" 00:18:58.070 ] 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_nvme_attach_controller", 00:18:58.070 "params": { 00:18:58.070 "name": "nvme0", 00:18:58.070 "trtype": "TCP", 00:18:58.070 "adrfam": "IPv4", 00:18:58.070 "traddr": "10.0.0.3", 00:18:58.070 "trsvcid": "4420", 00:18:58.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.070 "prchk_reftag": false, 00:18:58.070 
"prchk_guard": false, 00:18:58.070 "ctrlr_loss_timeout_sec": 0, 00:18:58.070 "reconnect_delay_sec": 0, 00:18:58.070 "fast_io_fail_timeout_sec": 0, 00:18:58.070 "psk": "key0", 00:18:58.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.070 "hdgst": false, 00:18:58.070 "ddgst": false, 00:18:58.070 "multipath": "multipath" 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_nvme_set_hotplug", 00:18:58.070 "params": { 00:18:58.070 "period_us": 100000, 00:18:58.070 "enable": false 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_enable_histogram", 00:18:58.070 "params": { 00:18:58.070 "name": "nvme0n1", 00:18:58.070 "enable": true 00:18:58.070 } 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "method": "bdev_wait_for_examine" 00:18:58.070 } 00:18:58.070 ] 00:18:58.070 }, 00:18:58.070 { 00:18:58.070 "subsystem": "nbd", 00:18:58.070 "config": [] 00:18:58.070 } 00:18:58.070 ] 00:18:58.070 }' 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72707 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72707 ']' 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72707 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72707 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.070 killing process with pid 72707 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72707' 00:18:58.070 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.070 00:18:58.070 Latency(us) 00:18:58.070 [2024-11-27T06:14:03.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.070 [2024-11-27T06:14:03.167Z] =================================================================================================================== 00:18:58.070 [2024-11-27T06:14:03.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72707 00:18:58.070 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72707 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72688 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72688 ']' 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72688 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72688 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.329 killing process with pid 72688 
00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72688' 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72688 00:18:58.329 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72688 00:18:58.589 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:58.589 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.589 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.589 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:58.589 "subsystems": [ 00:18:58.589 { 00:18:58.589 "subsystem": "keyring", 00:18:58.589 "config": [ 00:18:58.589 { 00:18:58.589 "method": "keyring_file_add_key", 00:18:58.589 "params": { 00:18:58.589 "name": "key0", 00:18:58.589 "path": "/tmp/tmp.JzkmDIITwb" 00:18:58.589 } 00:18:58.589 } 00:18:58.589 ] 00:18:58.589 }, 00:18:58.589 { 00:18:58.589 "subsystem": "iobuf", 00:18:58.589 "config": [ 00:18:58.589 { 00:18:58.589 "method": "iobuf_set_options", 00:18:58.589 "params": { 00:18:58.589 "small_pool_count": 8192, 00:18:58.589 "large_pool_count": 1024, 00:18:58.589 "small_bufsize": 8192, 00:18:58.589 "large_bufsize": 135168, 00:18:58.589 "enable_numa": false 00:18:58.589 } 00:18:58.589 } 00:18:58.589 ] 00:18:58.589 }, 00:18:58.589 { 00:18:58.589 "subsystem": "sock", 00:18:58.589 "config": [ 00:18:58.589 { 00:18:58.589 "method": "sock_set_default_impl", 00:18:58.589 "params": { 00:18:58.589 "impl_name": "uring" 00:18:58.589 } 00:18:58.589 }, 00:18:58.589 { 00:18:58.589 "method": "sock_impl_set_options", 00:18:58.589 "params": { 00:18:58.589 "impl_name": "ssl", 00:18:58.589 "recv_buf_size": 4096, 00:18:58.589 "send_buf_size": 4096, 00:18:58.589 "enable_recv_pipe": true, 00:18:58.589 "enable_quickack": false, 00:18:58.589 "enable_placement_id": 0, 00:18:58.589 "enable_zerocopy_send_server": true, 00:18:58.589 "enable_zerocopy_send_client": false, 00:18:58.589 "zerocopy_threshold": 0, 00:18:58.589 "tls_version": 0, 00:18:58.589 "enable_ktls": false 00:18:58.589 } 00:18:58.589 }, 00:18:58.589 { 00:18:58.589 "method": "sock_impl_set_options", 00:18:58.589 "params": { 00:18:58.589 "impl_name": "posix", 00:18:58.589 "recv_buf_size": 2097152, 00:18:58.589 "send_buf_size": 2097152, 00:18:58.589 "enable_recv_pipe": true, 00:18:58.589 "enable_quickack": false, 00:18:58.589 "enable_placement_id": 0, 00:18:58.589 "enable_zerocopy_send_server": true, 00:18:58.589 "enable_zerocopy_send_client": false, 00:18:58.589 "zerocopy_threshold": 0, 00:18:58.589 "tls_version": 0, 00:18:58.589 "enable_ktls": false 00:18:58.589 } 00:18:58.589 }, 00:18:58.589 { 00:18:58.589 "method": "sock_impl_set_options", 00:18:58.589 "params": { 00:18:58.589 "impl_name": "uring", 00:18:58.589 "recv_buf_size": 2097152, 00:18:58.589 "send_buf_size": 2097152, 00:18:58.589 "enable_recv_pipe": true, 00:18:58.589 "enable_quickack": false, 00:18:58.589 "enable_placement_id": 0, 00:18:58.589 "enable_zerocopy_send_server": false, 00:18:58.589 "enable_zerocopy_send_client": false, 00:18:58.589 "zerocopy_threshold": 0, 00:18:58.589 "tls_version": 0, 00:18:58.589 "enable_ktls": false 00:18:58.589 } 00:18:58.589 } 00:18:58.589 ] 00:18:58.589 }, 00:18:58.589 { 
00:18:58.589 "subsystem": "vmd", 00:18:58.589 "config": [] 00:18:58.589 }, 00:18:58.589 { 00:18:58.589 "subsystem": "accel", 00:18:58.589 "config": [ 00:18:58.589 { 00:18:58.589 "method": "accel_set_options", 00:18:58.589 "params": { 00:18:58.589 "small_cache_size": 128, 00:18:58.589 "large_cache_size": 16, 00:18:58.589 "task_count": 2048, 00:18:58.590 "sequence_count": 2048, 00:18:58.590 "buf_count": 2048 00:18:58.590 } 00:18:58.590 } 00:18:58.590 ] 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "subsystem": "bdev", 00:18:58.590 "config": [ 00:18:58.590 { 00:18:58.590 "method": "bdev_set_options", 00:18:58.590 "params": { 00:18:58.590 "bdev_io_pool_size": 65535, 00:18:58.590 "bdev_io_cache_size": 256, 00:18:58.590 "bdev_auto_examine": true, 00:18:58.590 "iobuf_small_cache_size": 128, 00:18:58.590 "iobuf_large_cache_size": 16 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "bdev_raid_set_options", 00:18:58.590 "params": { 00:18:58.590 "process_window_size_kb": 1024, 00:18:58.590 "process_max_bandwidth_mb_sec": 0 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "bdev_iscsi_set_options", 00:18:58.590 "params": { 00:18:58.590 "timeout_sec": 30 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "bdev_nvme_set_options", 00:18:58.590 "params": { 00:18:58.590 "action_on_timeout": "none", 00:18:58.590 "timeout_us": 0, 00:18:58.590 "timeout_admin_us": 0, 00:18:58.590 "keep_alive_timeout_ms": 10000, 00:18:58.590 "arbitration_burst": 0, 00:18:58.590 "low_priority_weight": 0, 00:18:58.590 "medium_priority_weight": 0, 00:18:58.590 "high_priority_weight": 0, 00:18:58.590 "nvme_adminq_poll_period_us": 10000, 00:18:58.590 "nvme_ioq_poll_period_us": 0, 00:18:58.590 "io_queue_requests": 0, 00:18:58.590 "delay_cmd_submit": true, 00:18:58.590 "transport_retry_count": 4, 00:18:58.590 "bdev_retry_count": 3, 00:18:58.590 "transport_ack_timeout": 0, 00:18:58.590 "ctrlr_loss_timeout_sec": 0, 00:18:58.590 "reconnect_delay_sec": 0, 00:18:58.590 "fast_io_fail_timeout_sec": 0, 00:18:58.590 "disable_auto_failback": false, 00:18:58.590 "generate_uuids": false, 00:18:58.590 "transport_tos": 0, 00:18:58.590 "nvme_error_stat": false, 00:18:58.590 "rdma_srq_size": 0, 00:18:58.590 "io_path_stat": false, 00:18:58.590 "allow_accel_sequence": false, 00:18:58.590 "rdma_max_cq_size": 0, 00:18:58.590 "rdma_cm_event_timeout_ms": 0, 00:18:58.590 "dhchap_digests": [ 00:18:58.590 "sha256", 00:18:58.590 "sha384", 00:18:58.590 "sha512" 00:18:58.590 ], 00:18:58.590 "dhchap_dhgroups": [ 00:18:58.590 "null", 00:18:58.590 "ffdhe2048", 00:18:58.590 "ffdhe3072", 00:18:58.590 "ffdhe4096", 00:18:58.590 "ffdhe6144", 00:18:58.590 "ffdhe8192" 00:18:58.590 ] 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "bdev_nvme_set_hotplug", 00:18:58.590 "params": { 00:18:58.590 "period_us": 100000, 00:18:58.590 "enable": false 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "bdev_malloc_create", 00:18:58.590 "params": { 00:18:58.590 "name": "malloc0", 00:18:58.590 "num_blocks": 8192, 00:18:58.590 "block_size": 4096, 00:18:58.590 "physical_block_size": 4096, 00:18:58.590 "uuid": "6e26d5ab-816e-4f04-8d0c-24c8d9a06cf3", 00:18:58.590 "optimal_io_boundary": 0, 00:18:58.590 "md_size": 0, 00:18:58.590 "dif_type": 0, 00:18:58.590 "dif_is_head_of_md": false, 00:18:58.590 "dif_pi_format": 0 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "bdev_wait_for_examine" 00:18:58.590 } 00:18:58.590 ] 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "subsystem": 
"nbd", 00:18:58.590 "config": [] 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "subsystem": "scheduler", 00:18:58.590 "config": [ 00:18:58.590 { 00:18:58.590 "method": "framework_set_scheduler", 00:18:58.590 "params": { 00:18:58.590 "name": "static" 00:18:58.590 } 00:18:58.590 } 00:18:58.590 ] 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "subsystem": "nvmf", 00:18:58.590 "config": [ 00:18:58.590 { 00:18:58.590 "method": "nvmf_set_config", 00:18:58.590 "params": { 00:18:58.590 "discovery_filter": "match_any", 00:18:58.590 "admin_cmd_passthru": { 00:18:58.590 "identify_ctrlr": false 00:18:58.590 }, 00:18:58.590 "dhchap_digests": [ 00:18:58.590 "sha256", 00:18:58.590 "sha384", 00:18:58.590 "sha512" 00:18:58.590 ], 00:18:58.590 "dhchap_dhgroups": [ 00:18:58.590 "null", 00:18:58.590 "ffdhe2048", 00:18:58.590 "ffdhe3072", 00:18:58.590 "ffdhe4096", 00:18:58.590 "ffdhe6144", 00:18:58.590 "ffdhe8192" 00:18:58.590 ] 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_set_max_subsystems", 00:18:58.590 "params": { 00:18:58.590 "max_subsystems": 1024 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_set_crdt", 00:18:58.590 "params": { 00:18:58.590 "crdt1": 0, 00:18:58.590 "crdt2": 0, 00:18:58.590 "crdt3": 0 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_create_transport", 00:18:58.590 "params": { 00:18:58.590 "trtype": "TCP", 00:18:58.590 "max_queue_depth": 128, 00:18:58.590 "max_io_qpairs_per_ctrlr": 127, 00:18:58.590 "in_capsule_data_size": 4096, 00:18:58.590 "max_io_size": 131072, 00:18:58.590 "io_unit_size": 131072, 00:18:58.590 "max_aq_depth": 128, 00:18:58.590 "num_shared_buffers": 511, 00:18:58.590 "buf_cache_size": 4294967295, 00:18:58.590 "dif_insert_or_strip": false, 00:18:58.590 "zcopy": false, 00:18:58.590 "c2h_success": false, 00:18:58.590 "sock_priority": 0, 00:18:58.590 "abort_timeout_sec": 1, 00:18:58.590 "ack_timeout": 0, 00:18:58.590 "data_wr_pool_size": 0 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_create_subsystem", 00:18:58.590 "params": { 00:18:58.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.590 "allow_any_host": false, 00:18:58.590 "serial_number": "00000000000000000000", 00:18:58.590 "model_number": "SPDK bdev Controller", 00:18:58.590 "max_namespaces": 32, 00:18:58.590 "min_cntlid": 1, 00:18:58.590 "max_cntlid": 65519, 00:18:58.590 "ana_reporting": false 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_subsystem_add_host", 00:18:58.590 "params": { 00:18:58.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.590 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.590 "psk": "key0" 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_subsystem_add_ns", 00:18:58.590 "params": { 00:18:58.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.590 "namespace": { 00:18:58.590 "nsid": 1, 00:18:58.590 "bdev_name": "malloc0", 00:18:58.590 "nguid": "6E26D5AB816E4F048D0C24C8D9A06CF3", 00:18:58.590 "uuid": "6e26d5ab-816e-4f04-8d0c-24c8d9a06cf3", 00:18:58.590 "no_auto_visible": false 00:18:58.590 } 00:18:58.590 } 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "method": "nvmf_subsystem_add_listener", 00:18:58.590 "params": { 00:18:58.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.590 "listen_address": { 00:18:58.590 "trtype": "TCP", 00:18:58.590 "adrfam": "IPv4", 00:18:58.590 "traddr": "10.0.0.3", 00:18:58.590 "trsvcid": "4420" 00:18:58.590 }, 00:18:58.590 "secure_channel": false, 00:18:58.590 "sock_impl": "ssl" 00:18:58.590 } 00:18:58.590 } 
00:18:58.590 ] 00:18:58.590 } 00:18:58.590 ] 00:18:58.590 }' 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72760 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72760 00:18:58.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72760 ']' 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.590 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.590 [2024-11-27 06:14:03.665094] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:58.590 [2024-11-27 06:14:03.665620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.848 [2024-11-27 06:14:03.813343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.848 [2024-11-27 06:14:03.864430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.848 [2024-11-27 06:14:03.864481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.848 [2024-11-27 06:14:03.864491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.848 [2024-11-27 06:14:03.864499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.848 [2024-11-27 06:14:03.864505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.848 [2024-11-27 06:14:03.864911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.106 [2024-11-27 06:14:04.035400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:59.106 [2024-11-27 06:14:04.116728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.106 [2024-11-27 06:14:04.148676] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.106 [2024-11-27 06:14:04.148922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:59.674 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72792 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72792 /var/tmp/bdevperf.sock 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72792 ']' 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.675 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:59.675 "subsystems": [ 00:18:59.675 { 00:18:59.675 "subsystem": "keyring", 00:18:59.675 "config": [ 00:18:59.675 { 00:18:59.675 "method": "keyring_file_add_key", 00:18:59.675 "params": { 00:18:59.675 "name": "key0", 00:18:59.675 "path": "/tmp/tmp.JzkmDIITwb" 00:18:59.675 } 00:18:59.675 } 00:18:59.675 ] 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "subsystem": "iobuf", 00:18:59.675 "config": [ 00:18:59.675 { 00:18:59.675 "method": "iobuf_set_options", 00:18:59.675 "params": { 00:18:59.675 "small_pool_count": 8192, 00:18:59.675 "large_pool_count": 1024, 00:18:59.675 "small_bufsize": 8192, 00:18:59.675 "large_bufsize": 135168, 00:18:59.675 "enable_numa": false 00:18:59.675 } 00:18:59.675 } 00:18:59.675 ] 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "subsystem": "sock", 00:18:59.675 "config": [ 00:18:59.675 { 00:18:59.675 "method": "sock_set_default_impl", 00:18:59.675 "params": { 00:18:59.675 "impl_name": "uring" 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "sock_impl_set_options", 00:18:59.675 "params": { 00:18:59.675 "impl_name": "ssl", 00:18:59.675 "recv_buf_size": 4096, 00:18:59.675 "send_buf_size": 4096, 00:18:59.675 "enable_recv_pipe": true, 00:18:59.675 "enable_quickack": false, 00:18:59.675 "enable_placement_id": 0, 00:18:59.675 "enable_zerocopy_send_server": true, 00:18:59.675 "enable_zerocopy_send_client": false, 00:18:59.675 "zerocopy_threshold": 0, 00:18:59.675 "tls_version": 0, 00:18:59.675 "enable_ktls": false 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "sock_impl_set_options", 00:18:59.675 "params": { 00:18:59.675 "impl_name": "posix", 00:18:59.675 "recv_buf_size": 2097152, 00:18:59.675 "send_buf_size": 2097152, 00:18:59.675 "enable_recv_pipe": true, 00:18:59.675 "enable_quickack": false, 00:18:59.675 "enable_placement_id": 0, 00:18:59.675 "enable_zerocopy_send_server": true, 00:18:59.675 "enable_zerocopy_send_client": false, 00:18:59.675 "zerocopy_threshold": 0, 00:18:59.675 "tls_version": 0, 00:18:59.675 "enable_ktls": false 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "sock_impl_set_options", 00:18:59.675 "params": { 00:18:59.675 "impl_name": "uring", 00:18:59.675 "recv_buf_size": 2097152, 00:18:59.675 "send_buf_size": 2097152, 00:18:59.675 "enable_recv_pipe": true, 00:18:59.675 "enable_quickack": false, 00:18:59.675 "enable_placement_id": 0, 00:18:59.675 "enable_zerocopy_send_server": false, 00:18:59.675 "enable_zerocopy_send_client": false, 00:18:59.675 "zerocopy_threshold": 0, 00:18:59.675 "tls_version": 0, 00:18:59.675 "enable_ktls": false 00:18:59.675 } 00:18:59.675 } 00:18:59.675 ] 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "subsystem": "vmd", 00:18:59.675 "config": [] 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "subsystem": "accel", 00:18:59.675 "config": [ 00:18:59.675 { 00:18:59.675 "method": "accel_set_options", 00:18:59.675 "params": { 00:18:59.675 "small_cache_size": 128, 00:18:59.675 "large_cache_size": 16, 00:18:59.675 "task_count": 2048, 00:18:59.675 "sequence_count": 2048, 
00:18:59.675 "buf_count": 2048 00:18:59.675 } 00:18:59.675 } 00:18:59.675 ] 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "subsystem": "bdev", 00:18:59.675 "config": [ 00:18:59.675 { 00:18:59.675 "method": "bdev_set_options", 00:18:59.675 "params": { 00:18:59.675 "bdev_io_pool_size": 65535, 00:18:59.675 "bdev_io_cache_size": 256, 00:18:59.675 "bdev_auto_examine": true, 00:18:59.675 "iobuf_small_cache_size": 128, 00:18:59.675 "iobuf_large_cache_size": 16 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "bdev_raid_set_options", 00:18:59.675 "params": { 00:18:59.675 "process_window_size_kb": 1024, 00:18:59.675 "process_max_bandwidth_mb_sec": 0 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "bdev_iscsi_set_options", 00:18:59.675 "params": { 00:18:59.675 "timeout_sec": 30 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "bdev_nvme_set_options", 00:18:59.675 "params": { 00:18:59.675 "action_on_timeout": "none", 00:18:59.675 "timeout_us": 0, 00:18:59.675 "timeout_admin_us": 0, 00:18:59.675 "keep_alive_timeout_ms": 10000, 00:18:59.675 "arbitration_burst": 0, 00:18:59.675 "low_priority_weight": 0, 00:18:59.675 "medium_priority_weight": 0, 00:18:59.675 "high_priority_weight": 0, 00:18:59.675 "nvme_adminq_poll_period_us": 10000, 00:18:59.675 "nvme_ioq_poll_period_us": 0, 00:18:59.675 "io_queue_requests": 512, 00:18:59.675 "delay_cmd_submit": true, 00:18:59.675 "transport_retry_count": 4, 00:18:59.675 "bdev_retry_count": 3, 00:18:59.675 "transport_ack_timeout": 0, 00:18:59.675 "ctrlr_loss_timeout_sec": 0, 00:18:59.675 "reconnect_delay_sec": 0, 00:18:59.675 "fast_io_fail_timeout_sec": 0, 00:18:59.675 "disable_auto_failback": false, 00:18:59.675 "generate_uuids": false, 00:18:59.675 "transport_tos": 0, 00:18:59.675 "nvme_error_stat": false, 00:18:59.675 "rdma_srq_size": 0, 00:18:59.675 "io_path_stat": false, 00:18:59.675 "allow_accel_sequence": false, 00:18:59.675 "rdma_max_cq_size": 0, 00:18:59.675 "rdma_cm_event_timeout_ms": 0, 00:18:59.675 "dhchap_digests": [ 00:18:59.675 "sha256", 00:18:59.675 "sha384", 00:18:59.675 "sha512" 00:18:59.675 ], 00:18:59.675 "dhchap_dhgroups": [ 00:18:59.675 "null", 00:18:59.675 "ffdhe2048", 00:18:59.675 "ffdhe3072", 00:18:59.675 "ffdhe4096", 00:18:59.675 "ffdhe6144", 00:18:59.675 "ffdhe8192" 00:18:59.675 ] 00:18:59.675 } 00:18:59.675 }, 00:18:59.675 { 00:18:59.675 "method": "bdev_nvme_attach_controller", 00:18:59.675 "params": { 00:18:59.675 "name": "nvme0", 00:18:59.675 "trtype": "TCP", 00:18:59.675 "adrfam": "IPv4", 00:18:59.675 "traddr": "10.0.0.3", 00:18:59.675 "trsvcid": "4420", 00:18:59.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.675 "prchk_reftag": false, 00:18:59.675 "prchk_guard": false, 00:18:59.676 "ctrlr_loss_timeout_sec": 0, 00:18:59.676 "reconnect_delay_sec": 0, 00:18:59.676 "fast_io_fail_timeout_sec": 0, 00:18:59.676 "psk": "key0", 00:18:59.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.676 "hdgst": false, 00:18:59.676 "ddgst": false, 00:18:59.676 "multipath": "multipath" 00:18:59.676 } 00:18:59.676 }, 00:18:59.676 { 00:18:59.676 "method": "bdev_nvme_set_hotplug", 00:18:59.676 "params": { 00:18:59.676 "period_us": 100000, 00:18:59.676 "enable": false 00:18:59.676 } 00:18:59.676 }, 00:18:59.676 { 00:18:59.676 "method": "bdev_enable_histogram", 00:18:59.676 "params": { 00:18:59.676 "name": "nvme0n1", 00:18:59.676 "enable": true 00:18:59.676 } 00:18:59.676 }, 00:18:59.676 { 00:18:59.676 "method": "bdev_wait_for_examine" 00:18:59.676 } 00:18:59.676 ] 00:18:59.676 }, 00:18:59.676 { 
00:18:59.676 "subsystem": "nbd", 00:18:59.676 "config": [] 00:18:59.676 } 00:18:59.676 ] 00:18:59.676 }' 00:18:59.676 [2024-11-27 06:14:04.727806] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:18:59.676 [2024-11-27 06:14:04.728970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72792 ] 00:18:59.934 [2024-11-27 06:14:04.881249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.934 [2024-11-27 06:14:04.936001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.191 [2024-11-27 06:14:05.075588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.191 [2024-11-27 06:14:05.125513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.758 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.758 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.758 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.758 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:01.017 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.017 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.017 Running I/O for 1 seconds... 
00:19:02.439 3453.00 IOPS, 13.49 MiB/s 00:19:02.439 Latency(us) 00:19:02.439 [2024-11-27T06:14:07.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.439 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:02.439 Verification LBA range: start 0x0 length 0x2000 00:19:02.439 nvme0n1 : 1.04 3457.74 13.51 0.00 0.00 36585.10 10366.60 24546.21 00:19:02.439 [2024-11-27T06:14:07.536Z] =================================================================================================================== 00:19:02.439 [2024-11-27T06:14:07.536Z] Total : 3457.74 13.51 0.00 0.00 36585.10 10366.60 24546.21 00:19:02.439 { 00:19:02.439 "results": [ 00:19:02.439 { 00:19:02.439 "job": "nvme0n1", 00:19:02.439 "core_mask": "0x2", 00:19:02.439 "workload": "verify", 00:19:02.439 "status": "finished", 00:19:02.439 "verify_range": { 00:19:02.439 "start": 0, 00:19:02.439 "length": 8192 00:19:02.439 }, 00:19:02.439 "queue_depth": 128, 00:19:02.439 "io_size": 4096, 00:19:02.439 "runtime": 1.035647, 00:19:02.439 "iops": 3457.7418753687307, 00:19:02.439 "mibps": 13.506804200659104, 00:19:02.439 "io_failed": 0, 00:19:02.439 "io_timeout": 0, 00:19:02.439 "avg_latency_us": 36585.096412886196, 00:19:02.439 "min_latency_us": 10366.603636363636, 00:19:02.439 "max_latency_us": 24546.21090909091 00:19:02.439 } 00:19:02.439 ], 00:19:02.439 "core_count": 1 00:19:02.439 } 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:02.439 nvmf_trace.0 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72792 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72792 ']' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72792 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72792 00:19:02.439 killing 
process with pid 72792 00:19:02.439 Received shutdown signal, test time was about 1.000000 seconds 00:19:02.439 00:19:02.439 Latency(us) 00:19:02.439 [2024-11-27T06:14:07.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.439 [2024-11-27T06:14:07.536Z] =================================================================================================================== 00:19:02.439 [2024-11-27T06:14:07.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72792' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72792 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72792 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.439 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:02.439 rmmod nvme_tcp 00:19:02.698 rmmod nvme_fabrics 00:19:02.698 rmmod nvme_keyring 00:19:02.698 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.698 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:02.698 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72760 ']' 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72760 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72760 ']' 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72760 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72760 00:19:02.699 killing process with pid 72760 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72760' 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72760 00:19:02.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72760 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:02.957 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:02.957 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:02.957 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PNSq9Wm43L /tmp/tmp.0Tdx4P45zX /tmp/tmp.JzkmDIITwb 00:19:03.216 00:19:03.216 real 1m25.081s 00:19:03.216 user 2m15.667s 00:19:03.216 sys 0m29.700s 00:19:03.216 ************************************ 00:19:03.216 END TEST nvmf_tls 00:19:03.216 ************************************ 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.216 ************************************ 00:19:03.216 START TEST nvmf_fips 00:19:03.216 ************************************ 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:03.216 * Looking for test storage... 00:19:03.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:03.216 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:03.476 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:03.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.477 --rc genhtml_branch_coverage=1 00:19:03.477 --rc genhtml_function_coverage=1 00:19:03.477 --rc genhtml_legend=1 00:19:03.477 --rc geninfo_all_blocks=1 00:19:03.477 --rc geninfo_unexecuted_blocks=1 00:19:03.477 00:19:03.477 ' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:03.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.477 --rc genhtml_branch_coverage=1 00:19:03.477 --rc genhtml_function_coverage=1 00:19:03.477 --rc genhtml_legend=1 00:19:03.477 --rc geninfo_all_blocks=1 00:19:03.477 --rc geninfo_unexecuted_blocks=1 00:19:03.477 00:19:03.477 ' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:03.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.477 --rc genhtml_branch_coverage=1 00:19:03.477 --rc genhtml_function_coverage=1 00:19:03.477 --rc genhtml_legend=1 00:19:03.477 --rc geninfo_all_blocks=1 00:19:03.477 --rc geninfo_unexecuted_blocks=1 00:19:03.477 00:19:03.477 ' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:03.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.477 --rc genhtml_branch_coverage=1 00:19:03.477 --rc genhtml_function_coverage=1 00:19:03.477 --rc genhtml_legend=1 00:19:03.477 --rc geninfo_all_blocks=1 00:19:03.477 --rc geninfo_unexecuted_blocks=1 00:19:03.477 00:19:03.477 ' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:03.477 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:03.478 Error setting digest 00:19:03.478 40D2A10D3B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:03.478 40D2A10D3B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:03.478 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.737 
06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:03.737 Cannot find device "nvmf_init_br" 00:19:03.737 06:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:03.737 Cannot find device "nvmf_init_br2" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:03.737 Cannot find device "nvmf_tgt_br" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.737 Cannot find device "nvmf_tgt_br2" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:03.737 Cannot find device "nvmf_init_br" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:03.737 Cannot find device "nvmf_init_br2" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:03.737 Cannot find device "nvmf_tgt_br" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:03.737 Cannot find device "nvmf_tgt_br2" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:03.737 Cannot find device "nvmf_br" 00:19:03.737 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:03.738 Cannot find device "nvmf_init_if" 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:03.738 Cannot find device "nvmf_init_if2" 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.738 06:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:03.738 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:03.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:03.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:03.996 00:19:03.996 --- 10.0.0.3 ping statistics --- 00:19:03.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.996 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:03.996 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:03.996 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:03.996 00:19:03.996 --- 10.0.0.4 ping statistics --- 00:19:03.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.996 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:03.996 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:03.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:03.996 00:19:03.996 --- 10.0.0.1 ping statistics --- 00:19:03.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.996 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:03.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:03.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:03.997 00:19:03.997 --- 10.0.0.2 ping statistics --- 00:19:03.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.997 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73105 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73105 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73105 ']' 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.997 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.997 [2024-11-27 06:14:09.085985] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:19:03.997 [2024-11-27 06:14:09.086282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.256 [2024-11-27 06:14:09.239534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.256 [2024-11-27 06:14:09.298247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.256 [2024-11-27 06:14:09.298567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.256 [2024-11-27 06:14:09.298799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.256 [2024-11-27 06:14:09.299060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.256 [2024-11-27 06:14:09.299077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.256 [2024-11-27 06:14:09.299567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.515 [2024-11-27 06:14:09.359641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Mmh 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Mmh 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Mmh 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Mmh 00:19:05.082 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.649 [2024-11-27 06:14:10.461469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.649 [2024-11-27 06:14:10.477428] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.649 [2024-11-27 06:14:10.477631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:05.649 malloc0 00:19:05.649 06:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73152 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73152 /var/tmp/bdevperf.sock 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73152 ']' 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.649 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.649 [2024-11-27 06:14:10.634712] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:05.649 [2024-11-27 06:14:10.634815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73152 ] 00:19:05.907 [2024-11-27 06:14:10.789815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.907 [2024-11-27 06:14:10.860751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.907 [2024-11-27 06:14:10.919616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.841 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.841 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:06.841 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Mmh 00:19:07.099 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.356 [2024-11-27 06:14:12.253337] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.356 TLSTESTn1 00:19:07.356 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.614 Running I/O for 10 seconds... 
00:19:09.480 3840.00 IOPS, 15.00 MiB/s [2024-11-27T06:14:15.511Z] 3904.00 IOPS, 15.25 MiB/s [2024-11-27T06:14:16.884Z] 3943.33 IOPS, 15.40 MiB/s [2024-11-27T06:14:17.842Z] 4011.75 IOPS, 15.67 MiB/s [2024-11-27T06:14:18.781Z] 4060.60 IOPS, 15.86 MiB/s [2024-11-27T06:14:19.716Z] 4092.17 IOPS, 15.99 MiB/s [2024-11-27T06:14:20.652Z] 4096.14 IOPS, 16.00 MiB/s [2024-11-27T06:14:21.587Z] 4098.12 IOPS, 16.01 MiB/s [2024-11-27T06:14:22.523Z] 4098.78 IOPS, 16.01 MiB/s [2024-11-27T06:14:22.523Z] 4137.90 IOPS, 16.16 MiB/s 00:19:17.426 Latency(us) 00:19:17.426 [2024-11-27T06:14:22.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.426 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.426 Verification LBA range: start 0x0 length 0x2000 00:19:17.426 TLSTESTn1 : 10.02 4144.11 16.19 0.00 0.00 30831.10 5451.40 24307.90 00:19:17.426 [2024-11-27T06:14:22.523Z] =================================================================================================================== 00:19:17.426 [2024-11-27T06:14:22.523Z] Total : 4144.11 16.19 0.00 0.00 30831.10 5451.40 24307.90 00:19:17.426 { 00:19:17.426 "results": [ 00:19:17.426 { 00:19:17.426 "job": "TLSTESTn1", 00:19:17.426 "core_mask": "0x4", 00:19:17.426 "workload": "verify", 00:19:17.426 "status": "finished", 00:19:17.426 "verify_range": { 00:19:17.426 "start": 0, 00:19:17.426 "length": 8192 00:19:17.426 }, 00:19:17.426 "queue_depth": 128, 00:19:17.426 "io_size": 4096, 00:19:17.426 "runtime": 10.015899, 00:19:17.426 "iops": 4144.111277479935, 00:19:17.426 "mibps": 16.187934677655996, 00:19:17.426 "io_failed": 0, 00:19:17.426 "io_timeout": 0, 00:19:17.426 "avg_latency_us": 30831.100132332554, 00:19:17.426 "min_latency_us": 5451.403636363636, 00:19:17.426 "max_latency_us": 24307.898181818182 00:19:17.426 } 00:19:17.426 ], 00:19:17.426 "core_count": 1 00:19:17.426 } 00:19:17.426 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:17.426 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:17.426 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:17.426 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:17.426 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:17.685 nvmf_trace.0 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73152 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73152 ']' 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73152 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73152 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.685 killing process with pid 73152 00:19:17.685 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.685 00:19:17.685 Latency(us) 00:19:17.685 [2024-11-27T06:14:22.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.685 [2024-11-27T06:14:22.782Z] =================================================================================================================== 00:19:17.685 [2024-11-27T06:14:22.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73152' 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73152 00:19:17.685 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73152 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.945 rmmod nvme_tcp 00:19:17.945 rmmod nvme_fabrics 00:19:17.945 rmmod nvme_keyring 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73105 ']' 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73105 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73105 ']' 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73105 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73105 00:19:17.945 killing process with pid 73105 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73105' 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73105 00:19:17.945 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73105 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:18.204 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:19:18.464 06:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Mmh 00:19:18.464 ************************************ 00:19:18.464 END TEST nvmf_fips 00:19:18.464 ************************************ 00:19:18.464 00:19:18.464 real 0m15.272s 00:19:18.464 user 0m21.698s 00:19:18.464 sys 0m5.722s 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.464 ************************************ 00:19:18.464 START TEST nvmf_control_msg_list 00:19:18.464 ************************************ 00:19:18.464 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:18.728 * Looking for test storage... 00:19:18.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.728 --rc genhtml_branch_coverage=1 00:19:18.728 --rc genhtml_function_coverage=1 00:19:18.728 --rc genhtml_legend=1 00:19:18.728 --rc geninfo_all_blocks=1 00:19:18.728 --rc geninfo_unexecuted_blocks=1 00:19:18.728 00:19:18.728 ' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.728 --rc genhtml_branch_coverage=1 00:19:18.728 --rc genhtml_function_coverage=1 00:19:18.728 --rc genhtml_legend=1 00:19:18.728 --rc geninfo_all_blocks=1 00:19:18.728 --rc geninfo_unexecuted_blocks=1 00:19:18.728 00:19:18.728 ' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.728 --rc genhtml_branch_coverage=1 00:19:18.728 --rc genhtml_function_coverage=1 00:19:18.728 --rc genhtml_legend=1 00:19:18.728 --rc geninfo_all_blocks=1 00:19:18.728 --rc geninfo_unexecuted_blocks=1 00:19:18.728 00:19:18.728 ' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.728 --rc genhtml_branch_coverage=1 00:19:18.728 --rc genhtml_function_coverage=1 00:19:18.728 --rc genhtml_legend=1 00:19:18.728 --rc geninfo_all_blocks=1 00:19:18.728 --rc 
geninfo_unexecuted_blocks=1 00:19:18.728 00:19:18.728 ' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:18.728 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.729 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:18.729 Cannot find device "nvmf_init_br" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:18.729 Cannot find device "nvmf_init_br2" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:18.729 Cannot find device "nvmf_tgt_br" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:18.729 Cannot find device "nvmf_tgt_br2" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:18.729 Cannot find device "nvmf_init_br" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:18.729 Cannot find device "nvmf_init_br2" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:18.729 Cannot find device "nvmf_tgt_br" 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:19:18.729 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:18.992 Cannot find device "nvmf_tgt_br2" 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:18.992 Cannot find device "nvmf_br" 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:18.992 Cannot find 
device "nvmf_init_if" 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:18.992 Cannot find device "nvmf_init_if2" 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:18.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:18.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:18.992 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:18.992 06:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:18.992 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:19.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:19.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:19:19.250 00:19:19.250 --- 10.0.0.3 ping statistics --- 00:19:19.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.250 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:19.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:19.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:19.250 00:19:19.250 --- 10.0.0.4 ping statistics --- 00:19:19.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.250 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:19.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:19.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:19.250 00:19:19.250 --- 10.0.0.1 ping statistics --- 00:19:19.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.250 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:19.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:19:19.250 00:19:19.250 --- 10.0.0.2 ping statistics --- 00:19:19.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.250 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.250 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73536 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73536 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73536 ']' 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.251 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.251 [2024-11-27 06:14:24.209739] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:19.251 [2024-11-27 06:14:24.209841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.509 [2024-11-27 06:14:24.361961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.509 [2024-11-27 06:14:24.426233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.509 [2024-11-27 06:14:24.426304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.509 [2024-11-27 06:14:24.426318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.509 [2024-11-27 06:14:24.426328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.509 [2024-11-27 06:14:24.426337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.509 [2024-11-27 06:14:24.426803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.509 [2024-11-27 06:14:24.486115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.509 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.767 [2024-11-27 06:14:24.604757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.767 Malloc0 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.767 [2024-11-27 06:14:24.644350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73562 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73563 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73564 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:19.767 06:14:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73562 00:19:19.767 [2024-11-27 06:14:24.832676] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:19.767 [2024-11-27 06:14:24.843293] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:19.767 [2024-11-27 06:14:24.843487] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:21.141 Initializing NVMe Controllers 00:19:21.141 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:21.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:21.141 Initialization complete. Launching workers. 00:19:21.141 ======================================================== 00:19:21.141 Latency(us) 00:19:21.141 Device Information : IOPS MiB/s Average min max 00:19:21.141 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3383.00 13.21 295.22 132.10 753.01 00:19:21.141 ======================================================== 00:19:21.141 Total : 3383.00 13.21 295.22 132.10 753.01 00:19:21.141 00:19:21.141 Initializing NVMe Controllers 00:19:21.141 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:21.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:21.141 Initialization complete. Launching workers. 00:19:21.141 ======================================================== 00:19:21.141 Latency(us) 00:19:21.141 Device Information : IOPS MiB/s Average min max 00:19:21.141 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3390.97 13.25 294.56 173.36 421.28 00:19:21.141 ======================================================== 00:19:21.141 Total : 3390.97 13.25 294.56 173.36 421.28 00:19:21.141 00:19:21.141 Initializing NVMe Controllers 00:19:21.141 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:21.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:21.141 Initialization complete. Launching workers. 
00:19:21.141 ======================================================== 00:19:21.141 Latency(us) 00:19:21.141 Device Information : IOPS MiB/s Average min max 00:19:21.141 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3387.00 13.23 294.89 189.94 773.75 00:19:21.141 ======================================================== 00:19:21.141 Total : 3387.00 13.23 294.89 189.94 773.75 00:19:21.141 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73563 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73564 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.141 rmmod nvme_tcp 00:19:21.141 rmmod nvme_fabrics 00:19:21.141 rmmod nvme_keyring 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73536 ']' 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73536 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73536 ']' 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73536 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73536 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.141 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.141 killing process with pid 73536 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73536' 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73536 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73536 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:21.141 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:19:21.399 00:19:21.399 real 0m2.948s 00:19:21.399 user 0m4.851s 00:19:21.399 
sys 0m1.328s 00:19:21.399 ************************************ 00:19:21.399 END TEST nvmf_control_msg_list 00:19:21.399 ************************************ 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.399 ************************************ 00:19:21.399 START TEST nvmf_wait_for_buf 00:19:21.399 ************************************ 00:19:21.399 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:21.659 * Looking for test storage... 00:19:21.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.659 --rc genhtml_branch_coverage=1 00:19:21.659 --rc genhtml_function_coverage=1 00:19:21.659 --rc genhtml_legend=1 00:19:21.659 --rc geninfo_all_blocks=1 00:19:21.659 --rc geninfo_unexecuted_blocks=1 00:19:21.659 00:19:21.659 ' 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.659 --rc genhtml_branch_coverage=1 00:19:21.659 --rc genhtml_function_coverage=1 00:19:21.659 --rc genhtml_legend=1 00:19:21.659 --rc geninfo_all_blocks=1 00:19:21.659 --rc geninfo_unexecuted_blocks=1 00:19:21.659 00:19:21.659 ' 00:19:21.659 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.659 --rc genhtml_branch_coverage=1 00:19:21.659 --rc genhtml_function_coverage=1 00:19:21.659 --rc genhtml_legend=1 00:19:21.659 --rc geninfo_all_blocks=1 00:19:21.659 --rc geninfo_unexecuted_blocks=1 00:19:21.659 00:19:21.659 ' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.660 --rc genhtml_branch_coverage=1 00:19:21.660 --rc genhtml_function_coverage=1 00:19:21.660 --rc genhtml_legend=1 00:19:21.660 --rc geninfo_all_blocks=1 00:19:21.660 --rc geninfo_unexecuted_blocks=1 00:19:21.660 00:19:21.660 ' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.660 06:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.660 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
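nvmftestinit, traced next, rebuilds the same virtual network the previous test used: initiator-side veth pairs on the host, target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, and one bridge joining the peer ends. The "Cannot find device" / "Cannot open network namespace" messages are the tolerated cleanup pass (the trace shows the fallback true on the same common.sh line each time). As a compact reference, a sketch of the equivalent manual setup, using the interface names and 10.0.0.x/24 addresses from this run and showing only the first interface of each pair, is roughly:

    # Target network namespace plus one initiator-side and one target-side veth pair
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # Initiator gets 10.0.0.1, target gets 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # Bring links up and bridge the peer ends on the host
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Accept NVMe/TCP traffic on the initiator interface and let the bridge forward it
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, the cross-namespace pings in the trace (host to 10.0.0.3/10.0.0.4 and the namespace back to 10.0.0.1/10.0.0.2) confirm the bridge is forwarding before the wait_for_buf target is started.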
00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:21.660 Cannot find device "nvmf_init_br" 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:21.660 Cannot find device "nvmf_init_br2" 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:21.660 Cannot find device "nvmf_tgt_br" 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.660 Cannot find device "nvmf_tgt_br2" 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:19:21.660 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:21.919 Cannot find device "nvmf_init_br" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:21.919 Cannot find device "nvmf_init_br2" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:21.919 Cannot find device "nvmf_tgt_br" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:21.919 Cannot find device "nvmf_tgt_br2" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:21.919 Cannot find device "nvmf_br" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:21.919 Cannot find device "nvmf_init_if" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:21.919 Cannot find device "nvmf_init_if2" 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.919 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.919 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:21.919 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:21.919 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:22.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:19:22.177 00:19:22.177 --- 10.0.0.3 ping statistics --- 00:19:22.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.177 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:22.177 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:22.177 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:19:22.177 00:19:22.177 --- 10.0.0.4 ping statistics --- 00:19:22.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.177 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:22.177 00:19:22.177 --- 10.0.0.1 ping statistics --- 00:19:22.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.177 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:22.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:22.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:19:22.177 00:19:22.177 --- 10.0.0.2 ping statistics --- 00:19:22.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.177 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:19:22.177 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73799 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73799 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73799 ']' 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:22.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.178 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.178 [2024-11-27 06:14:27.176446] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
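The nvmf_veth_init trace above first tries to tear down any leftover interfaces (the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host) and then builds the virtual test network the rest of this run depends on: a network namespace for the target, four veth pairs, a bridge joining the host-side peers, 10.0.0.1-10.0.0.4/24 addressing, and iptables ACCEPT rules for NVMe/TCP port 4420. A condensed, hand-written sketch of the same wiring, with interface names and addresses taken from the trace (run as root; this is not the harness's own code):

  # Namespace for the target side.
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry addresses, the *_br ends get bridged;
  # the two target-side *_if interfaces move into the namespace.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1/.2 on the host, target 10.0.0.3/.4 in the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, then bridge the host-side peers together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Let NVMe/TCP traffic (port 4420) in from the initiator interfaces and across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks that follow in the trace simply confirm this wiring: the host can reach 10.0.0.3/.4 inside the namespace and the namespace can reach 10.0.0.1/.2 on the host.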
00:19:22.178 [2024-11-27 06:14:27.177117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.436 [2024-11-27 06:14:27.326312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.436 [2024-11-27 06:14:27.384744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.436 [2024-11-27 06:14:27.384800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.436 [2024-11-27 06:14:27.384811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.436 [2024-11-27 06:14:27.384821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.436 [2024-11-27 06:14:27.384828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.436 [2024-11-27 06:14:27.385249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.436 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:22.437 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.437 06:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.694 [2024-11-27 06:14:27.536781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.694 Malloc0 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.694 [2024-11-27 06:14:27.602224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.694 [2024-11-27 06:14:27.626315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.694 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:22.952 [2024-11-27 06:14:27.840244] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:24.335 Initializing NVMe Controllers 00:19:24.335 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:24.335 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:24.335 Initialization complete. Launching workers. 00:19:24.335 ======================================================== 00:19:24.335 Latency(us) 00:19:24.335 Device Information : IOPS MiB/s Average min max 00:19:24.335 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 482.09 60.26 8297.50 5933.01 16028.95 00:19:24.335 ======================================================== 00:19:24.335 Total : 482.09 60.26 8297.50 5933.01 16028.95 00:19:24.335 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4598 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4598 -eq 0 ]] 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.335 rmmod nvme_tcp 00:19:24.335 rmmod nvme_fabrics 00:19:24.335 rmmod nvme_keyring 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73799 ']' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73799 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73799 ']' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73799 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73799 00:19:24.335 killing process with pid 73799 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73799' 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73799 00:19:24.335 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73799 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.593 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:19:24.851 00:19:24.851 real 0m3.243s 00:19:24.851 user 0m2.592s 00:19:24.851 sys 0m0.794s 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.851 ************************************ 00:19:24.851 END TEST nvmf_wait_for_buf 00:19:24.851 ************************************ 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.851 ************************************ 00:19:24.851 START TEST nvmf_nsid 00:19:24.851 ************************************ 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:24.851 * Looking for test storage... 
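The nvmf_wait_for_buf case that just finished works by starving the target of I/O buffers on purpose: iobuf_set_options shrinks the small pool to 154 buffers, spdk_nvme_perf then pushes 128 KiB random reads at queue depth 4 for one second, and the test only passes if the nvmf_TCP module had to retry small-pool allocations afterwards (retry_count=4598 above). The same check can be reproduced against a running target with rpc.py and jq; this is a sketch using the default RPC socket rather than the harness's rpc_cmd wrapper:

  # Pull iobuf statistics and extract the nvmf_TCP small-pool retry counter
  # (same jq filter as target/wait_for_buf.sh@32 in the trace above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  retry_count=$("$rpc" -s /var/tmp/spdk.sock iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')

  # Zero retries would mean the small pool was never exhausted, so the
  # wait-for-buffer path was never exercised and the test should fail.
  if [[ "$retry_count" -eq 0 ]]; then
      echo "iobuf small pool was never exhausted" >&2
      exit 1
  fi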
00:19:24.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:24.851 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:24.852 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:25.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.111 --rc genhtml_branch_coverage=1 00:19:25.111 --rc genhtml_function_coverage=1 00:19:25.111 --rc genhtml_legend=1 00:19:25.111 --rc geninfo_all_blocks=1 00:19:25.111 --rc geninfo_unexecuted_blocks=1 00:19:25.111 00:19:25.111 ' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:25.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.111 --rc genhtml_branch_coverage=1 00:19:25.111 --rc genhtml_function_coverage=1 00:19:25.111 --rc genhtml_legend=1 00:19:25.111 --rc geninfo_all_blocks=1 00:19:25.111 --rc geninfo_unexecuted_blocks=1 00:19:25.111 00:19:25.111 ' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:25.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.111 --rc genhtml_branch_coverage=1 00:19:25.111 --rc genhtml_function_coverage=1 00:19:25.111 --rc genhtml_legend=1 00:19:25.111 --rc geninfo_all_blocks=1 00:19:25.111 --rc geninfo_unexecuted_blocks=1 00:19:25.111 00:19:25.111 ' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:25.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.111 --rc genhtml_branch_coverage=1 00:19:25.111 --rc genhtml_function_coverage=1 00:19:25.111 --rc genhtml_legend=1 00:19:25.111 --rc geninfo_all_blocks=1 00:19:25.111 --rc geninfo_unexecuted_blocks=1 00:19:25.111 00:19:25.111 ' 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
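Before the nsid test proper starts, the harness checks whether the installed lcov (1.15 here) is older than 2.x so it can pick compatible coverage options; scripts/common.sh does this by splitting both version strings on dots and dashes and comparing the fields numerically from left to right, as traced above. A simplified sketch of that comparison (the real cmp_versions also validates that each field is numeric before comparing):

  # Minimal version comparison in the spirit of scripts/common.sh, assuming
  # purely numeric fields; missing fields are treated as 0.
  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.- op=$2 v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }

  lt 1.15 2 && echo "lcov is older than 2.x"   # true for the 1.15 detected above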
00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.111 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.112 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.112 06:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:25.112 Cannot find device "nvmf_init_br" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:25.112 Cannot find device "nvmf_init_br2" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:25.112 Cannot find device "nvmf_tgt_br" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.112 Cannot find device "nvmf_tgt_br2" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:25.112 Cannot find device "nvmf_init_br" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:25.112 Cannot find device "nvmf_init_br2" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:25.112 Cannot find device "nvmf_tgt_br" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:25.112 Cannot find device "nvmf_tgt_br2" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:25.112 Cannot find device "nvmf_br" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:25.112 Cannot find device "nvmf_init_if" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:25.112 Cannot find device "nvmf_init_if2" 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:19:25.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.112 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:25.113 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:25.371 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
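One detail worth calling out before the firewall rules are re-added for this second setup: every rule the ipts wrapper inserts (common.sh@217-219 in the first run above, and again just below) is tagged with an SPDK_NVMF comment, which is what allowed the iptr step in the earlier nvmftestfini teardown to remove exactly those rules and nothing else. A minimal sketch of the same tag-and-strip pattern:

  # Insert a rule carrying the SPDK_NVMF tag (same convention as the ipts wrapper).
  rule='-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables $rule -m comment --comment "SPDK_NVMF:$rule"

  # Cleanup: rewrite the ruleset without any SPDK_NVMF-tagged rules, leaving
  # unrelated rules untouched (same pipeline as the iptr helper seen earlier).
  iptables-save | grep -v SPDK_NVMF | iptables-restore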
00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:25.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:25.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:25.372 00:19:25.372 --- 10.0.0.3 ping statistics --- 00:19:25.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.372 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:25.372 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:25.372 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:19:25.372 00:19:25.372 --- 10.0.0.4 ping statistics --- 00:19:25.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.372 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:25.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:25.372 00:19:25.372 --- 10.0.0.1 ping statistics --- 00:19:25.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.372 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:25.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:25.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:25.372 00:19:25.372 --- 10.0.0.2 ping statistics --- 00:19:25.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.372 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74061 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74061 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74061 ']' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.372 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:25.372 [2024-11-27 06:14:30.466471] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
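As in the earlier wait_for_buf run, nvmfappstart launches nvmf_tgt inside the target namespace (this time pinned to core 0 with -m 1, with all tracepoint groups enabled via -e 0xFFFF) and then blocks until the application answers on its UNIX-domain RPC socket. A rough stand-in for that start-and-wait step; the readiness probe shown here is illustrative, the harness's waitforlisten helper performs its own checks:

  # Start the target inside the namespace with core mask 0x1.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!

  # Wait until the app responds on the default RPC socket; rpc_get_methods is a
  # cheap RPC that succeeds as soon as the listener is up.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done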
00:19:25.372 [2024-11-27 06:14:30.466557] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.632 [2024-11-27 06:14:30.611322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.632 [2024-11-27 06:14:30.673675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.632 [2024-11-27 06:14:30.673726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.632 [2024-11-27 06:14:30.673738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.632 [2024-11-27 06:14:30.673746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.632 [2024-11-27 06:14:30.673754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.632 [2024-11-27 06:14:30.674275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.891 [2024-11-27 06:14:30.729106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74086 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0b6ef7b2-115d-4df3-bb80-e23969835097 00:19:25.891 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9ae6a514-3f4c-467a-b676-ae22278c115e 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4fd2f5e0-3457-4309-b080-135f15c05464 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:25.892 null0 00:19:25.892 null1 00:19:25.892 null2 00:19:25.892 [2024-11-27 06:14:30.895647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.892 [2024-11-27 06:14:30.908929] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:25.892 [2024-11-27 06:14:30.909020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74086 ] 00:19:25.892 [2024-11-27 06:14:30.919776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74086 /var/tmp/tgt2.sock 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74086 ']' 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
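At this point the nsid test has two SPDK applications running side by side: the namespaced nvmf_tgt answering on the default /var/tmp/spdk.sock and reachable at 10.0.0.3, and a host-side spdk_tgt started with core mask 0x2 and its own RPC socket /var/tmp/tgt2.sock, which will shortly listen at 10.0.0.1 port 4421. Keeping the RPC sockets separate is what lets one script configure both targets; a sketch of addressing each one (rpc_get_methods is only an example call):

  # Second target: core mask 0x2 and a private RPC socket, started on the host side.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" rpc_get_methods > /dev/null                        # default socket -> first target
  "$rpc" -s /var/tmp/tgt2.sock rpc_get_methods > /dev/null  # explicit socket -> second target

The three UUIDs generated just above become the expected namespace identities: further down, the test reads each namespace's NGUID back with nvme id-ns piped through jq, and the value it expects is simply the corresponding UUID with its dashes stripped, compared as upper-case hex.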
00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.892 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:26.209 [2024-11-27 06:14:31.062468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.209 [2024-11-27 06:14:31.131959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.209 [2024-11-27 06:14:31.205981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:26.470 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.470 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:26.470 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:27.039 [2024-11-27 06:14:31.851916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.039 [2024-11-27 06:14:31.868009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:27.039 nvme0n1 nvme0n2 00:19:27.039 nvme1n1 00:19:27.039 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:27.039 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:27.039 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:19:27.039 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:28.417 06:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0b6ef7b2-115d-4df3-bb80-e23969835097 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0b6ef7b2115d4df3bb80e23969835097 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0B6EF7B2115D4DF3BB80E23969835097 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0B6EF7B2115D4DF3BB80E23969835097 == \0\B\6\E\F\7\B\2\1\1\5\D\4\D\F\3\B\B\8\0\E\2\3\9\6\9\8\3\5\0\9\7 ]] 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9ae6a514-3f4c-467a-b676-ae22278c115e 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9ae6a5143f4c467ab676ae22278c115e 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9AE6A5143F4C467AB676AE22278C115E 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9AE6A5143F4C467AB676AE22278C115E == \9\A\E\6\A\5\1\4\3\F\4\C\4\6\7\A\B\6\7\6\A\E\2\2\2\7\8\C\1\1\5\E ]] 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:28.417 06:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4fd2f5e0-3457-4309-b080-135f15c05464 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4fd2f5e034574309b080135f15c05464 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4FD2F5E034574309B080135F15C05464 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4FD2F5E034574309B080135F15C05464 == \4\F\D\2\F\5\E\0\3\4\5\7\4\3\0\9\B\0\8\0\1\3\5\F\1\5\C\0\5\4\6\4 ]] 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74086 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74086 ']' 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74086 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.417 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74086 00:19:28.676 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.676 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.676 killing process with pid 74086 00:19:28.676 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74086' 00:19:28.676 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74086 00:19:28.676 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74086 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.935 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.935 rmmod nvme_tcp 00:19:28.935 rmmod nvme_fabrics 00:19:28.935 rmmod nvme_keyring 00:19:28.935 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.935 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:28.935 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:28.935 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74061 ']' 00:19:28.936 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74061 00:19:28.936 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74061 ']' 00:19:28.936 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74061 00:19:28.936 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:28.936 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.936 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74061 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.194 killing process with pid 74061 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74061' 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74061 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74061 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:29.194 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:19:29.453 00:19:29.453 real 0m4.721s 00:19:29.453 user 0m7.052s 00:19:29.453 sys 0m1.677s 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.453 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:29.453 ************************************ 00:19:29.453 END TEST nvmf_nsid 00:19:29.453 ************************************ 00:19:29.713 06:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:29.713 00:19:29.714 real 5m3.897s 00:19:29.714 user 10m32.917s 00:19:29.714 sys 1m10.647s 00:19:29.714 06:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.714 ************************************ 00:19:29.714 END TEST nvmf_target_extra 00:19:29.714 06:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:29.714 ************************************ 00:19:29.714 06:14:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:29.714 06:14:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.714 06:14:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.714 06:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:29.714 ************************************ 00:19:29.714 START TEST nvmf_host 00:19:29.714 ************************************ 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:29.714 * Looking for test storage... 
00:19:29.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.714 --rc genhtml_branch_coverage=1 00:19:29.714 --rc genhtml_function_coverage=1 00:19:29.714 --rc genhtml_legend=1 00:19:29.714 --rc geninfo_all_blocks=1 00:19:29.714 --rc geninfo_unexecuted_blocks=1 00:19:29.714 00:19:29.714 ' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.714 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:29.714 --rc genhtml_branch_coverage=1 00:19:29.714 --rc genhtml_function_coverage=1 00:19:29.714 --rc genhtml_legend=1 00:19:29.714 --rc geninfo_all_blocks=1 00:19:29.714 --rc geninfo_unexecuted_blocks=1 00:19:29.714 00:19:29.714 ' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.714 --rc genhtml_branch_coverage=1 00:19:29.714 --rc genhtml_function_coverage=1 00:19:29.714 --rc genhtml_legend=1 00:19:29.714 --rc geninfo_all_blocks=1 00:19:29.714 --rc geninfo_unexecuted_blocks=1 00:19:29.714 00:19:29.714 ' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.714 --rc genhtml_branch_coverage=1 00:19:29.714 --rc genhtml_function_coverage=1 00:19:29.714 --rc genhtml_legend=1 00:19:29.714 --rc geninfo_all_blocks=1 00:19:29.714 --rc geninfo_unexecuted_blocks=1 00:19:29.714 00:19:29.714 ' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.714 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.715 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:29.975 
06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.975 ************************************ 00:19:29.975 START TEST nvmf_identify 00:19:29.975 ************************************ 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:29.975 * Looking for test storage... 00:19:29.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.975 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.976 06:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.976 --rc genhtml_branch_coverage=1 00:19:29.976 --rc genhtml_function_coverage=1 00:19:29.976 --rc genhtml_legend=1 00:19:29.976 --rc geninfo_all_blocks=1 00:19:29.976 --rc geninfo_unexecuted_blocks=1 00:19:29.976 00:19:29.976 ' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.976 --rc genhtml_branch_coverage=1 00:19:29.976 --rc genhtml_function_coverage=1 00:19:29.976 --rc genhtml_legend=1 00:19:29.976 --rc geninfo_all_blocks=1 00:19:29.976 --rc geninfo_unexecuted_blocks=1 00:19:29.976 00:19:29.976 ' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.976 --rc genhtml_branch_coverage=1 00:19:29.976 --rc genhtml_function_coverage=1 00:19:29.976 --rc genhtml_legend=1 00:19:29.976 --rc geninfo_all_blocks=1 00:19:29.976 --rc geninfo_unexecuted_blocks=1 00:19:29.976 00:19:29.976 ' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.976 --rc genhtml_branch_coverage=1 00:19:29.976 --rc genhtml_function_coverage=1 00:19:29.976 --rc genhtml_legend=1 00:19:29.976 --rc geninfo_all_blocks=1 00:19:29.976 --rc geninfo_unexecuted_blocks=1 00:19:29.976 00:19:29.976 ' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.976 
06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.976 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.976 06:14:35 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:29.977 Cannot find device "nvmf_init_br" 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:29.977 Cannot find device "nvmf_init_br2" 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:19:29.977 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:30.236 Cannot find device "nvmf_tgt_br" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:19:30.236 Cannot find device "nvmf_tgt_br2" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:30.236 Cannot find device "nvmf_init_br" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:30.236 Cannot find device "nvmf_init_br2" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:30.236 Cannot find device "nvmf_tgt_br" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:30.236 Cannot find device "nvmf_tgt_br2" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:30.236 Cannot find device "nvmf_br" 00:19:30.236 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:30.237 Cannot find device "nvmf_init_if" 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:30.237 Cannot find device "nvmf_init_if2" 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.237 
06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.237 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:30.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:30.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:19:30.496 00:19:30.496 --- 10.0.0.3 ping statistics --- 00:19:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.496 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:30.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:30.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:19:30.496 00:19:30.496 --- 10.0.0.4 ping statistics --- 00:19:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.496 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:30.496 00:19:30.496 --- 10.0.0.1 ping statistics --- 00:19:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.496 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:30.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:30.496 00:19:30.496 --- 10.0.0.2 ping statistics --- 00:19:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.496 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.496 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74439 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74439 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74439 ']' 00:19:30.497 
06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.497 06:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.497 [2024-11-27 06:14:35.488849] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:30.497 [2024-11-27 06:14:35.488968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.756 [2024-11-27 06:14:35.645080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.756 [2024-11-27 06:14:35.711196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.756 [2024-11-27 06:14:35.711268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.756 [2024-11-27 06:14:35.711295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.756 [2024-11-27 06:14:35.711306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.756 [2024-11-27 06:14:35.711315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:30.756 [2024-11-27 06:14:35.712630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.756 [2024-11-27 06:14:35.712768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.756 [2024-11-27 06:14:35.712875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.756 [2024-11-27 06:14:35.712879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.756 [2024-11-27 06:14:35.775186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 [2024-11-27 06:14:36.496204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 Malloc0 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 [2024-11-27 06:14:36.592933] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.691 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.691 [ 00:19:31.691 { 00:19:31.691 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:31.691 "subtype": "Discovery", 00:19:31.691 "listen_addresses": [ 00:19:31.691 { 00:19:31.691 "trtype": "TCP", 00:19:31.691 "adrfam": "IPv4", 00:19:31.691 "traddr": "10.0.0.3", 00:19:31.691 "trsvcid": "4420" 00:19:31.691 } 00:19:31.691 ], 00:19:31.691 "allow_any_host": true, 00:19:31.691 "hosts": [] 00:19:31.691 }, 00:19:31.691 { 00:19:31.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.692 "subtype": "NVMe", 00:19:31.692 "listen_addresses": [ 00:19:31.692 { 00:19:31.692 "trtype": "TCP", 00:19:31.692 "adrfam": "IPv4", 00:19:31.692 "traddr": "10.0.0.3", 00:19:31.692 "trsvcid": "4420" 00:19:31.692 } 00:19:31.692 ], 00:19:31.692 "allow_any_host": true, 00:19:31.692 "hosts": [], 00:19:31.692 "serial_number": "SPDK00000000000001", 00:19:31.692 "model_number": "SPDK bdev Controller", 00:19:31.692 "max_namespaces": 32, 00:19:31.692 "min_cntlid": 1, 00:19:31.692 "max_cntlid": 65519, 00:19:31.692 "namespaces": [ 00:19:31.692 { 00:19:31.692 "nsid": 1, 00:19:31.692 "bdev_name": "Malloc0", 00:19:31.692 "name": "Malloc0", 00:19:31.692 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:31.692 "eui64": "ABCDEF0123456789", 00:19:31.692 "uuid": "b61b7aa0-8be1-450a-a012-eb411738da6a" 00:19:31.692 } 00:19:31.692 ] 00:19:31.692 } 00:19:31.692 ] 00:19:31.692 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.692 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:31.692 [2024-11-27 06:14:36.644705] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
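The RPC sequence just traced (TCP transport, a Malloc0 bdev, the cnode1 subsystem with its namespace, and data plus discovery listeners on 10.0.0.3:4420) is followed by the discovery-page query via spdk_nvme_identify. A rough standalone equivalent using the stock scripts/rpc.py client; the harness's rpc_cmd ultimately drives the same RPCs, and the flags and NQNs below are taken verbatim from the trace:

  # Transport and backing bdev (64 MiB malloc device with 512-byte blocks)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # Subsystem, namespace and listeners
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # Query the discovery subsystem, as host/identify.sh does next
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all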
00:19:31.692 [2024-11-27 06:14:36.644761] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74474 ] 00:19:31.953 [2024-11-27 06:14:36.809607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:31.953 [2024-11-27 06:14:36.809686] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:31.953 [2024-11-27 06:14:36.809695] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:31.953 [2024-11-27 06:14:36.809716] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:31.953 [2024-11-27 06:14:36.809732] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:31.953 [2024-11-27 06:14:36.810157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:31.953 [2024-11-27 06:14:36.810246] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x716750 0 00:19:31.953 [2024-11-27 06:14:36.816152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:31.953 [2024-11-27 06:14:36.816181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:31.953 [2024-11-27 06:14:36.816188] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:31.953 [2024-11-27 06:14:36.816192] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:31.953 [2024-11-27 06:14:36.816232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.816240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.816245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.953 [2024-11-27 06:14:36.816259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:31.953 [2024-11-27 06:14:36.816294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.953 [2024-11-27 06:14:36.823189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.953 [2024-11-27 06:14:36.823215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.953 [2024-11-27 06:14:36.823221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.953 [2024-11-27 06:14:36.823242] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:31.953 [2024-11-27 06:14:36.823252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:31.953 [2024-11-27 06:14:36.823260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:31.953 [2024-11-27 06:14:36.823280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:19:31.953 [2024-11-27 06:14:36.823295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.953 [2024-11-27 06:14:36.823308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.953 [2024-11-27 06:14:36.823345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.953 [2024-11-27 06:14:36.823423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.953 [2024-11-27 06:14:36.823434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.953 [2024-11-27 06:14:36.823439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.953 [2024-11-27 06:14:36.823466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:31.953 [2024-11-27 06:14:36.823477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:31.953 [2024-11-27 06:14:36.823489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.953 [2024-11-27 06:14:36.823512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.953 [2024-11-27 06:14:36.823538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.953 [2024-11-27 06:14:36.823584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.953 [2024-11-27 06:14:36.823594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.953 [2024-11-27 06:14:36.823599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.953 [2024-11-27 06:14:36.823614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:31.953 [2024-11-27 06:14:36.823625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:31.953 [2024-11-27 06:14:36.823636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.953 [2024-11-27 06:14:36.823658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.953 [2024-11-27 06:14:36.823683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.953 [2024-11-27 06:14:36.823733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.953 [2024-11-27 06:14:36.823742] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.953 [2024-11-27 06:14:36.823748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.953 [2024-11-27 06:14:36.823762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:31.953 [2024-11-27 06:14:36.823776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.953 [2024-11-27 06:14:36.823799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.953 [2024-11-27 06:14:36.823822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.953 [2024-11-27 06:14:36.823867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.953 [2024-11-27 06:14:36.823876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.953 [2024-11-27 06:14:36.823882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.823888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.953 [2024-11-27 06:14:36.823895] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:31.953 [2024-11-27 06:14:36.823903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:31.953 [2024-11-27 06:14:36.823914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:31.953 [2024-11-27 06:14:36.824028] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:31.953 [2024-11-27 06:14:36.824036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:31.953 [2024-11-27 06:14:36.824049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.824056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.953 [2024-11-27 06:14:36.824062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.953 [2024-11-27 06:14:36.824072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.953 [2024-11-27 06:14:36.824097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.953 [2024-11-27 06:14:36.824168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.953 [2024-11-27 06:14:36.824180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.954 [2024-11-27 06:14:36.824187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:31.954 [2024-11-27 06:14:36.824194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.954 [2024-11-27 06:14:36.824203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:31.954 [2024-11-27 06:14:36.824218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.954 [2024-11-27 06:14:36.824259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.954 [2024-11-27 06:14:36.824327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.954 [2024-11-27 06:14:36.824334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.954 [2024-11-27 06:14:36.824338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.954 [2024-11-27 06:14:36.824348] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:31.954 [2024-11-27 06:14:36.824354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:31.954 [2024-11-27 06:14:36.824362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:31.954 [2024-11-27 06:14:36.824378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:31.954 [2024-11-27 06:14:36.824391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.954 [2024-11-27 06:14:36.824423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.954 [2024-11-27 06:14:36.824531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.954 [2024-11-27 06:14:36.824553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.954 [2024-11-27 06:14:36.824559] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824566] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x716750): datao=0, datal=4096, cccid=0 00:19:31.954 [2024-11-27 06:14:36.824573] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77a740) on tqpair(0x716750): expected_datao=0, payload_size=4096 00:19:31.954 [2024-11-27 06:14:36.824580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:19:31.954 [2024-11-27 06:14:36.824592] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824599] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.954 [2024-11-27 06:14:36.824620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.954 [2024-11-27 06:14:36.824633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.954 [2024-11-27 06:14:36.824652] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:31.954 [2024-11-27 06:14:36.824660] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:31.954 [2024-11-27 06:14:36.824666] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:31.954 [2024-11-27 06:14:36.824687] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:31.954 [2024-11-27 06:14:36.824694] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:31.954 [2024-11-27 06:14:36.824702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:31.954 [2024-11-27 06:14:36.824714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:31.954 [2024-11-27 06:14:36.824725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.954 [2024-11-27 06:14:36.824777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.954 [2024-11-27 06:14:36.824844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.954 [2024-11-27 06:14:36.824854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.954 [2024-11-27 06:14:36.824860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.954 [2024-11-27 06:14:36.824878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.954 [2024-11-27 06:14:36.824908] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.954 [2024-11-27 06:14:36.824936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.954 [2024-11-27 06:14:36.824965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.824976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.824984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.954 [2024-11-27 06:14:36.824992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:31.954 [2024-11-27 06:14:36.825003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:31.954 [2024-11-27 06:14:36.825013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.825029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.954 [2024-11-27 06:14:36.825065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a740, cid 0, qid 0 00:19:31.954 [2024-11-27 06:14:36.825075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77a8c0, cid 1, qid 0 00:19:31.954 [2024-11-27 06:14:36.825082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77aa40, cid 2, qid 0 00:19:31.954 [2024-11-27 06:14:36.825089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77abc0, cid 3, qid 0 00:19:31.954 [2024-11-27 06:14:36.825096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77ad40, cid 4, qid 0 00:19:31.954 [2024-11-27 06:14:36.825208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.954 [2024-11-27 06:14:36.825227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.954 [2024-11-27 06:14:36.825234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77ad40) on tqpair=0x716750 00:19:31.954 [2024-11-27 06:14:36.825249] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:31.954 [2024-11-27 06:14:36.825257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:31.954 [2024-11-27 06:14:36.825273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x716750) 00:19:31.954 [2024-11-27 06:14:36.825290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.954 [2024-11-27 06:14:36.825318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77ad40, cid 4, qid 0 00:19:31.954 [2024-11-27 06:14:36.825389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.954 [2024-11-27 06:14:36.825407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.954 [2024-11-27 06:14:36.825412] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825419] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x716750): datao=0, datal=4096, cccid=4 00:19:31.954 [2024-11-27 06:14:36.825426] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77ad40) on tqpair(0x716750): expected_datao=0, payload_size=4096 00:19:31.954 [2024-11-27 06:14:36.825433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825444] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825450] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.954 [2024-11-27 06:14:36.825472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.954 [2024-11-27 06:14:36.825477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77ad40) on tqpair=0x716750 00:19:31.954 [2024-11-27 06:14:36.825505] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:31.954 [2024-11-27 06:14:36.825542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.954 [2024-11-27 06:14:36.825556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x716750) 00:19:31.955 [2024-11-27 06:14:36.825567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.955 [2024-11-27 06:14:36.825578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x716750) 00:19:31.955 [2024-11-27 06:14:36.825598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.955 [2024-11-27 06:14:36.825634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x77ad40, cid 4, qid 0 00:19:31.955 [2024-11-27 06:14:36.825646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77aec0, cid 5, qid 0 00:19:31.955 [2024-11-27 06:14:36.825769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.955 [2024-11-27 06:14:36.825779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.955 [2024-11-27 06:14:36.825785] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825790] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x716750): datao=0, datal=1024, cccid=4 00:19:31.955 [2024-11-27 06:14:36.825797] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77ad40) on tqpair(0x716750): expected_datao=0, payload_size=1024 00:19:31.955 [2024-11-27 06:14:36.825804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825813] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825819] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.955 [2024-11-27 06:14:36.825836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.955 [2024-11-27 06:14:36.825841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77aec0) on tqpair=0x716750 00:19:31.955 [2024-11-27 06:14:36.825874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.955 [2024-11-27 06:14:36.825886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.955 [2024-11-27 06:14:36.825891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77ad40) on tqpair=0x716750 00:19:31.955 [2024-11-27 06:14:36.825936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.825949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x716750) 00:19:31.955 [2024-11-27 06:14:36.825960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.955 [2024-11-27 06:14:36.826000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77ad40, cid 4, qid 0 00:19:31.955 [2024-11-27 06:14:36.826093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.955 [2024-11-27 06:14:36.826109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.955 [2024-11-27 06:14:36.826115] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826121] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x716750): datao=0, datal=3072, cccid=4 00:19:31.955 [2024-11-27 06:14:36.826144] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77ad40) on tqpair(0x716750): expected_datao=0, payload_size=3072 00:19:31.955 [2024-11-27 06:14:36.826152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826163] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826169] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.955 [2024-11-27 06:14:36.826202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.955 [2024-11-27 06:14:36.826208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77ad40) on tqpair=0x716750 00:19:31.955 [2024-11-27 06:14:36.826229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x716750) 00:19:31.955 [2024-11-27 06:14:36.826246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.955 [2024-11-27 06:14:36.826283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77ad40, cid 4, qid 0 00:19:31.955 [2024-11-27 06:14:36.826363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.955 [2024-11-27 06:14:36.826372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.955 [2024-11-27 06:14:36.826376] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826380] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x716750): datao=0, datal=8, cccid=4 00:19:31.955 [2024-11-27 06:14:36.826385] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77ad40) on tqpair(0x716750): expected_datao=0, payload_size=8 00:19:31.955 [2024-11-27 06:14:36.826389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826397] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826401] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.955 [2024-11-27 06:14:36.826428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.955 [2024-11-27 06:14:36.826432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.955 [2024-11-27 06:14:36.826436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77ad40) on tqpair=0x716750 00:19:31.955 ===================================================== 00:19:31.955 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:31.955 ===================================================== 00:19:31.955 Controller Capabilities/Features 00:19:31.955 ================================ 00:19:31.955 Vendor ID: 0000 00:19:31.955 Subsystem Vendor ID: 0000 00:19:31.955 Serial Number: .................... 00:19:31.955 Model Number: ........................................ 
00:19:31.955 Firmware Version: 25.01 00:19:31.955 Recommended Arb Burst: 0 00:19:31.955 IEEE OUI Identifier: 00 00 00 00:19:31.955 Multi-path I/O 00:19:31.955 May have multiple subsystem ports: No 00:19:31.955 May have multiple controllers: No 00:19:31.955 Associated with SR-IOV VF: No 00:19:31.955 Max Data Transfer Size: 131072 00:19:31.955 Max Number of Namespaces: 0 00:19:31.955 Max Number of I/O Queues: 1024 00:19:31.955 NVMe Specification Version (VS): 1.3 00:19:31.955 NVMe Specification Version (Identify): 1.3 00:19:31.955 Maximum Queue Entries: 128 00:19:31.955 Contiguous Queues Required: Yes 00:19:31.955 Arbitration Mechanisms Supported 00:19:31.955 Weighted Round Robin: Not Supported 00:19:31.955 Vendor Specific: Not Supported 00:19:31.955 Reset Timeout: 15000 ms 00:19:31.955 Doorbell Stride: 4 bytes 00:19:31.955 NVM Subsystem Reset: Not Supported 00:19:31.955 Command Sets Supported 00:19:31.955 NVM Command Set: Supported 00:19:31.955 Boot Partition: Not Supported 00:19:31.955 Memory Page Size Minimum: 4096 bytes 00:19:31.955 Memory Page Size Maximum: 4096 bytes 00:19:31.955 Persistent Memory Region: Not Supported 00:19:31.955 Optional Asynchronous Events Supported 00:19:31.955 Namespace Attribute Notices: Not Supported 00:19:31.955 Firmware Activation Notices: Not Supported 00:19:31.955 ANA Change Notices: Not Supported 00:19:31.955 PLE Aggregate Log Change Notices: Not Supported 00:19:31.955 LBA Status Info Alert Notices: Not Supported 00:19:31.955 EGE Aggregate Log Change Notices: Not Supported 00:19:31.955 Normal NVM Subsystem Shutdown event: Not Supported 00:19:31.955 Zone Descriptor Change Notices: Not Supported 00:19:31.955 Discovery Log Change Notices: Supported 00:19:31.955 Controller Attributes 00:19:31.955 128-bit Host Identifier: Not Supported 00:19:31.955 Non-Operational Permissive Mode: Not Supported 00:19:31.955 NVM Sets: Not Supported 00:19:31.955 Read Recovery Levels: Not Supported 00:19:31.955 Endurance Groups: Not Supported 00:19:31.955 Predictable Latency Mode: Not Supported 00:19:31.955 Traffic Based Keep ALive: Not Supported 00:19:31.955 Namespace Granularity: Not Supported 00:19:31.955 SQ Associations: Not Supported 00:19:31.955 UUID List: Not Supported 00:19:31.955 Multi-Domain Subsystem: Not Supported 00:19:31.955 Fixed Capacity Management: Not Supported 00:19:31.955 Variable Capacity Management: Not Supported 00:19:31.955 Delete Endurance Group: Not Supported 00:19:31.955 Delete NVM Set: Not Supported 00:19:31.955 Extended LBA Formats Supported: Not Supported 00:19:31.955 Flexible Data Placement Supported: Not Supported 00:19:31.955 00:19:31.955 Controller Memory Buffer Support 00:19:31.955 ================================ 00:19:31.955 Supported: No 00:19:31.955 00:19:31.955 Persistent Memory Region Support 00:19:31.955 ================================ 00:19:31.955 Supported: No 00:19:31.955 00:19:31.955 Admin Command Set Attributes 00:19:31.955 ============================ 00:19:31.955 Security Send/Receive: Not Supported 00:19:31.955 Format NVM: Not Supported 00:19:31.955 Firmware Activate/Download: Not Supported 00:19:31.955 Namespace Management: Not Supported 00:19:31.955 Device Self-Test: Not Supported 00:19:31.955 Directives: Not Supported 00:19:31.955 NVMe-MI: Not Supported 00:19:31.955 Virtualization Management: Not Supported 00:19:31.955 Doorbell Buffer Config: Not Supported 00:19:31.955 Get LBA Status Capability: Not Supported 00:19:31.955 Command & Feature Lockdown Capability: Not Supported 00:19:31.955 Abort Command Limit: 1 00:19:31.955 Async 
Event Request Limit: 4 00:19:31.955 Number of Firmware Slots: N/A 00:19:31.955 Firmware Slot 1 Read-Only: N/A 00:19:31.955 Firmware Activation Without Reset: N/A 00:19:31.955 Multiple Update Detection Support: N/A 00:19:31.956 Firmware Update Granularity: No Information Provided 00:19:31.956 Per-Namespace SMART Log: No 00:19:31.956 Asymmetric Namespace Access Log Page: Not Supported 00:19:31.956 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:31.956 Command Effects Log Page: Not Supported 00:19:31.956 Get Log Page Extended Data: Supported 00:19:31.956 Telemetry Log Pages: Not Supported 00:19:31.956 Persistent Event Log Pages: Not Supported 00:19:31.956 Supported Log Pages Log Page: May Support 00:19:31.956 Commands Supported & Effects Log Page: Not Supported 00:19:31.956 Feature Identifiers & Effects Log Page:May Support 00:19:31.956 NVMe-MI Commands & Effects Log Page: May Support 00:19:31.956 Data Area 4 for Telemetry Log: Not Supported 00:19:31.956 Error Log Page Entries Supported: 128 00:19:31.956 Keep Alive: Not Supported 00:19:31.956 00:19:31.956 NVM Command Set Attributes 00:19:31.956 ========================== 00:19:31.956 Submission Queue Entry Size 00:19:31.956 Max: 1 00:19:31.956 Min: 1 00:19:31.956 Completion Queue Entry Size 00:19:31.956 Max: 1 00:19:31.956 Min: 1 00:19:31.956 Number of Namespaces: 0 00:19:31.956 Compare Command: Not Supported 00:19:31.956 Write Uncorrectable Command: Not Supported 00:19:31.956 Dataset Management Command: Not Supported 00:19:31.956 Write Zeroes Command: Not Supported 00:19:31.956 Set Features Save Field: Not Supported 00:19:31.956 Reservations: Not Supported 00:19:31.956 Timestamp: Not Supported 00:19:31.956 Copy: Not Supported 00:19:31.956 Volatile Write Cache: Not Present 00:19:31.956 Atomic Write Unit (Normal): 1 00:19:31.956 Atomic Write Unit (PFail): 1 00:19:31.956 Atomic Compare & Write Unit: 1 00:19:31.956 Fused Compare & Write: Supported 00:19:31.956 Scatter-Gather List 00:19:31.956 SGL Command Set: Supported 00:19:31.956 SGL Keyed: Supported 00:19:31.956 SGL Bit Bucket Descriptor: Not Supported 00:19:31.956 SGL Metadata Pointer: Not Supported 00:19:31.956 Oversized SGL: Not Supported 00:19:31.956 SGL Metadata Address: Not Supported 00:19:31.956 SGL Offset: Supported 00:19:31.956 Transport SGL Data Block: Not Supported 00:19:31.956 Replay Protected Memory Block: Not Supported 00:19:31.956 00:19:31.956 Firmware Slot Information 00:19:31.956 ========================= 00:19:31.956 Active slot: 0 00:19:31.956 00:19:31.956 00:19:31.956 Error Log 00:19:31.956 ========= 00:19:31.956 00:19:31.956 Active Namespaces 00:19:31.956 ================= 00:19:31.956 Discovery Log Page 00:19:31.956 ================== 00:19:31.956 Generation Counter: 2 00:19:31.956 Number of Records: 2 00:19:31.956 Record Format: 0 00:19:31.956 00:19:31.956 Discovery Log Entry 0 00:19:31.956 ---------------------- 00:19:31.956 Transport Type: 3 (TCP) 00:19:31.956 Address Family: 1 (IPv4) 00:19:31.956 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:31.956 Entry Flags: 00:19:31.956 Duplicate Returned Information: 1 00:19:31.956 Explicit Persistent Connection Support for Discovery: 1 00:19:31.956 Transport Requirements: 00:19:31.956 Secure Channel: Not Required 00:19:31.956 Port ID: 0 (0x0000) 00:19:31.956 Controller ID: 65535 (0xffff) 00:19:31.956 Admin Max SQ Size: 128 00:19:31.956 Transport Service Identifier: 4420 00:19:31.956 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:31.956 Transport Address: 10.0.0.3 00:19:31.956 
Discovery Log Entry 1 00:19:31.956 ---------------------- 00:19:31.956 Transport Type: 3 (TCP) 00:19:31.956 Address Family: 1 (IPv4) 00:19:31.956 Subsystem Type: 2 (NVM Subsystem) 00:19:31.956 Entry Flags: 00:19:31.956 Duplicate Returned Information: 0 00:19:31.956 Explicit Persistent Connection Support for Discovery: 0 00:19:31.956 Transport Requirements: 00:19:31.956 Secure Channel: Not Required 00:19:31.956 Port ID: 0 (0x0000) 00:19:31.956 Controller ID: 65535 (0xffff) 00:19:31.956 Admin Max SQ Size: 128 00:19:31.956 Transport Service Identifier: 4420 00:19:31.956 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:31.956 Transport Address: 10.0.0.3 [2024-11-27 06:14:36.826549] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:19:31.956 [2024-11-27 06:14:36.826567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a740) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.826575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.956 [2024-11-27 06:14:36.826582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77a8c0) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.826587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.956 [2024-11-27 06:14:36.826592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77aa40) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.826597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.956 [2024-11-27 06:14:36.826602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77abc0) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.826607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.956 [2024-11-27 06:14:36.826636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x716750) 00:19:31.956 [2024-11-27 06:14:36.826659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.956 [2024-11-27 06:14:36.826688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77abc0, cid 3, qid 0 00:19:31.956 [2024-11-27 06:14:36.826752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.956 [2024-11-27 06:14:36.826764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.956 [2024-11-27 06:14:36.826770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77abc0) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.826787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x716750) 00:19:31.956 [2024-11-27 06:14:36.826810] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.956 [2024-11-27 06:14:36.826841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77abc0, cid 3, qid 0 00:19:31.956 [2024-11-27 06:14:36.826921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.956 [2024-11-27 06:14:36.826942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.956 [2024-11-27 06:14:36.826948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77abc0) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.826963] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:31.956 [2024-11-27 06:14:36.826970] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:31.956 [2024-11-27 06:14:36.826985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.826997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x716750) 00:19:31.956 [2024-11-27 06:14:36.827008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.956 [2024-11-27 06:14:36.827035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77abc0, cid 3, qid 0 00:19:31.956 [2024-11-27 06:14:36.827086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.956 [2024-11-27 06:14:36.827099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.956 [2024-11-27 06:14:36.827104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.827108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77abc0) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.827121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.831153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.831165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x716750) 00:19:31.956 [2024-11-27 06:14:36.831175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.956 [2024-11-27 06:14:36.831206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77abc0, cid 3, qid 0 00:19:31.956 [2024-11-27 06:14:36.831274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.956 [2024-11-27 06:14:36.831283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.956 [2024-11-27 06:14:36.831287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.956 [2024-11-27 06:14:36.831292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77abc0) on tqpair=0x716750 00:19:31.956 [2024-11-27 06:14:36.831302] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:19:31.956 00:19:31.956 06:14:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:31.957 [2024-11-27 06:14:36.875643] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:31.957 [2024-11-27 06:14:36.875705] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74482 ] 00:19:31.957 [2024-11-27 06:14:37.041810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:19:31.957 [2024-11-27 06:14:37.041892] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:31.957 [2024-11-27 06:14:37.041904] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:31.957 [2024-11-27 06:14:37.041925] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:31.957 [2024-11-27 06:14:37.041942] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:31.957 [2024-11-27 06:14:37.042373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:19:31.957 [2024-11-27 06:14:37.042443] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19f6750 0 00:19:32.218 [2024-11-27 06:14:37.056159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:32.218 [2024-11-27 06:14:37.056190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:32.218 [2024-11-27 06:14:37.056199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:32.218 [2024-11-27 06:14:37.056205] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:32.218 [2024-11-27 06:14:37.056252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.056262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.056268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.056289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:32.218 [2024-11-27 06:14:37.056331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.064157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.064182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.064188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.064206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:32.218 [2024-11-27 06:14:37.064215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:32.218 [2024-11-27 06:14:37.064222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait 
for vs (no timeout) 00:19:32.218 [2024-11-27 06:14:37.064242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.064270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.218 [2024-11-27 06:14:37.064302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.064364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.064371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.064376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.064387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:32.218 [2024-11-27 06:14:37.064398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:32.218 [2024-11-27 06:14:37.064412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.064436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.218 [2024-11-27 06:14:37.064465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.064513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.064523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.064527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.064539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:32.218 [2024-11-27 06:14:37.064549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:32.218 [2024-11-27 06:14:37.064557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.064575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.218 [2024-11-27 06:14:37.064598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.064649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.064657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.064661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.064672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:32.218 [2024-11-27 06:14:37.064684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.064701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.218 [2024-11-27 06:14:37.064723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.064775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.064786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.064791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.064805] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:32.218 [2024-11-27 06:14:37.064814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:32.218 [2024-11-27 06:14:37.064825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:32.218 [2024-11-27 06:14:37.064941] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:32.218 [2024-11-27 06:14:37.064960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:32.218 [2024-11-27 06:14:37.064975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.064987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.064999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.218 [2024-11-27 06:14:37.065028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.065075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.065084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.065090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.065096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.065105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:32.218 [2024-11-27 06:14:37.065119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.065143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.065151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.218 [2024-11-27 06:14:37.065162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.218 [2024-11-27 06:14:37.065191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.218 [2024-11-27 06:14:37.065238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.218 [2024-11-27 06:14:37.065247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.218 [2024-11-27 06:14:37.065253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.218 [2024-11-27 06:14:37.065259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.218 [2024-11-27 06:14:37.065267] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:32.218 [2024-11-27 06:14:37.065275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:32.218 [2024-11-27 06:14:37.065287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:32.218 [2024-11-27 06:14:37.065302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:32.218 [2024-11-27 06:14:37.065317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.219 [2024-11-27 06:14:37.065362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.219 [2024-11-27 06:14:37.065462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.219 [2024-11-27 06:14:37.065485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.219 [2024-11-27 06:14:37.065494] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065500] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=4096, cccid=0 00:19:32.219 [2024-11-27 06:14:37.065507] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5a740) on tqpair(0x19f6750): expected_datao=0, payload_size=4096 
00:19:32.219 [2024-11-27 06:14:37.065515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065527] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065534] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.065556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.065561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.065580] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:32.219 [2024-11-27 06:14:37.065589] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:32.219 [2024-11-27 06:14:37.065596] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:32.219 [2024-11-27 06:14:37.065610] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:32.219 [2024-11-27 06:14:37.065618] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:32.219 [2024-11-27 06:14:37.065626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.065639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.065651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.219 [2024-11-27 06:14:37.065694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.219 [2024-11-27 06:14:37.065745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.065755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.065765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.065790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:32.219 [2024-11-27 06:14:37.065817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.219 [2024-11-27 06:14:37.065838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.219 [2024-11-27 06:14:37.065859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.219 [2024-11-27 06:14:37.065879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.065891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.065899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.065903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.065911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.219 [2024-11-27 06:14:37.065945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a740, cid 0, qid 0 00:19:32.219 [2024-11-27 06:14:37.065954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5a8c0, cid 1, qid 0 00:19:32.219 [2024-11-27 06:14:37.065959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5aa40, cid 2, qid 0 00:19:32.219 [2024-11-27 06:14:37.065964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.219 [2024-11-27 06:14:37.065969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.219 [2024-11-27 06:14:37.066053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.066061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.066065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.066076] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:32.219 [2024-11-27 06:14:37.066082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.066123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.219 [2024-11-27 06:14:37.066163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.219 [2024-11-27 06:14:37.066229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.066239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.066247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.066326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.066373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.219 [2024-11-27 06:14:37.066398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.219 [2024-11-27 06:14:37.066460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.219 [2024-11-27 06:14:37.066468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.219 [2024-11-27 06:14:37.066473] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=4096, cccid=4 00:19:32.219 [2024-11-27 06:14:37.066481] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5ad40) on tqpair(0x19f6750): expected_datao=0, payload_size=4096 00:19:32.219 [2024-11-27 06:14:37.066486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 
[2024-11-27 06:14:37.066495] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066499] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.066515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.066519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.066536] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:32.219 [2024-11-27 06:14:37.066551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.066585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.219 [2024-11-27 06:14:37.066615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.219 [2024-11-27 06:14:37.066686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.219 [2024-11-27 06:14:37.066694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.219 [2024-11-27 06:14:37.066698] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066702] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=4096, cccid=4 00:19:32.219 [2024-11-27 06:14:37.066707] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5ad40) on tqpair(0x19f6750): expected_datao=0, payload_size=4096 00:19:32.219 [2024-11-27 06:14:37.066712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066720] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.066739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.066743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.066769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:19:32.219 [2024-11-27 06:14:37.066792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.219 [2024-11-27 06:14:37.066805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.219 [2024-11-27 06:14:37.066828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.219 [2024-11-27 06:14:37.066883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.219 [2024-11-27 06:14:37.066892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.219 [2024-11-27 06:14:37.066897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=4096, cccid=4 00:19:32.219 [2024-11-27 06:14:37.066906] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5ad40) on tqpair(0x19f6750): expected_datao=0, payload_size=4096 00:19:32.219 [2024-11-27 06:14:37.066910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066918] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066923] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.219 [2024-11-27 06:14:37.066938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.219 [2024-11-27 06:14:37.066942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.219 [2024-11-27 06:14:37.066946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.219 [2024-11-27 06:14:37.066956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.066966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.066979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.066986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.066992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.066998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.067004] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:32.220 [2024-11-27 06:14:37.067009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:32.220 [2024-11-27 06:14:37.067015] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:32.220 [2024-11-27 06:14:37.067032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.220 [2024-11-27 06:14:37.067097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.220 [2024-11-27 06:14:37.067105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5aec0, cid 5, qid 0 00:19:32.220 [2024-11-27 06:14:37.067188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.067199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.067204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.067216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.067222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.067226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5aec0) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.067242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5aec0, cid 5, qid 0 00:19:32.220 [2024-11-27 06:14:37.067323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.067331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.067335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5aec0) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.067351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 
06:14:37.067363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5aec0, cid 5, qid 0 00:19:32.220 [2024-11-27 06:14:37.067428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.067435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.067439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5aec0) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.067455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5aec0, cid 5, qid 0 00:19:32.220 [2024-11-27 06:14:37.067532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.067539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.067543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5aec0) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.067569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19f6750) 00:19:32.220 [2024-11-27 06:14:37.067649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 
nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.220 [2024-11-27 06:14:37.067672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5aec0, cid 5, qid 0 00:19:32.220 [2024-11-27 06:14:37.067680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5ad40, cid 4, qid 0 00:19:32.220 [2024-11-27 06:14:37.067685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5b040, cid 6, qid 0 00:19:32.220 [2024-11-27 06:14:37.067691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5b1c0, cid 7, qid 0 00:19:32.220 [2024-11-27 06:14:37.067821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.220 [2024-11-27 06:14:37.067852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.220 [2024-11-27 06:14:37.067857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067861] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=8192, cccid=5 00:19:32.220 [2024-11-27 06:14:37.067866] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5aec0) on tqpair(0x19f6750): expected_datao=0, payload_size=8192 00:19:32.220 [2024-11-27 06:14:37.067872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067905] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.220 [2024-11-27 06:14:37.067921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.220 [2024-11-27 06:14:37.067926] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067932] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=512, cccid=4 00:19:32.220 [2024-11-27 06:14:37.067939] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5ad40) on tqpair(0x19f6750): expected_datao=0, payload_size=512 00:19:32.220 [2024-11-27 06:14:37.067946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067954] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067960] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.220 [2024-11-27 06:14:37.067976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.220 [2024-11-27 06:14:37.067981] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.067987] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=512, cccid=6 00:19:32.220 [2024-11-27 06:14:37.067994] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5b040) on tqpair(0x19f6750): expected_datao=0, payload_size=512 00:19:32.220 [2024-11-27 06:14:37.068000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068009] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068023] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.220 [2024-11-27 06:14:37.068032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.220 [2024-11-27 06:14:37.068038] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068044] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f6750): datao=0, datal=4096, cccid=7 00:19:32.220 [2024-11-27 06:14:37.068051] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a5b1c0) on tqpair(0x19f6750): expected_datao=0, payload_size=4096 00:19:32.220 [2024-11-27 06:14:37.068058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068067] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068072] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.068088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.068094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.068100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5aec0) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.068123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.072167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.072178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.072185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5ad40) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.072208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.072217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.072222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.072228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5b040) on tqpair=0x19f6750 00:19:32.220 [2024-11-27 06:14:37.072239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.220 [2024-11-27 06:14:37.072247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.220 [2024-11-27 06:14:37.072252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.220 [2024-11-27 06:14:37.072258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5b1c0) on tqpair=0x19f6750 00:19:32.220 ===================================================== 00:19:32.220 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.220 ===================================================== 00:19:32.220 Controller Capabilities/Features 00:19:32.220 ================================ 00:19:32.220 Vendor ID: 8086 00:19:32.220 Subsystem Vendor ID: 8086 00:19:32.220 Serial Number: SPDK00000000000001 00:19:32.220 Model Number: SPDK bdev Controller 00:19:32.220 Firmware Version: 25.01 00:19:32.220 Recommended Arb Burst: 6 00:19:32.220 IEEE OUI Identifier: e4 d2 5c 00:19:32.220 Multi-path I/O 00:19:32.220 May have multiple subsystem ports: Yes 00:19:32.220 May have multiple controllers: Yes 00:19:32.220 Associated with SR-IOV VF: No 00:19:32.220 Max Data 
Transfer Size: 131072 00:19:32.220 Max Number of Namespaces: 32 00:19:32.220 Max Number of I/O Queues: 127 00:19:32.220 NVMe Specification Version (VS): 1.3 00:19:32.220 NVMe Specification Version (Identify): 1.3 00:19:32.220 Maximum Queue Entries: 128 00:19:32.220 Contiguous Queues Required: Yes 00:19:32.220 Arbitration Mechanisms Supported 00:19:32.220 Weighted Round Robin: Not Supported 00:19:32.220 Vendor Specific: Not Supported 00:19:32.220 Reset Timeout: 15000 ms 00:19:32.220 Doorbell Stride: 4 bytes 00:19:32.220 NVM Subsystem Reset: Not Supported 00:19:32.220 Command Sets Supported 00:19:32.220 NVM Command Set: Supported 00:19:32.220 Boot Partition: Not Supported 00:19:32.220 Memory Page Size Minimum: 4096 bytes 00:19:32.220 Memory Page Size Maximum: 4096 bytes 00:19:32.220 Persistent Memory Region: Not Supported 00:19:32.220 Optional Asynchronous Events Supported 00:19:32.220 Namespace Attribute Notices: Supported 00:19:32.220 Firmware Activation Notices: Not Supported 00:19:32.220 ANA Change Notices: Not Supported 00:19:32.220 PLE Aggregate Log Change Notices: Not Supported 00:19:32.220 LBA Status Info Alert Notices: Not Supported 00:19:32.220 EGE Aggregate Log Change Notices: Not Supported 00:19:32.220 Normal NVM Subsystem Shutdown event: Not Supported 00:19:32.220 Zone Descriptor Change Notices: Not Supported 00:19:32.220 Discovery Log Change Notices: Not Supported 00:19:32.220 Controller Attributes 00:19:32.220 128-bit Host Identifier: Supported 00:19:32.220 Non-Operational Permissive Mode: Not Supported 00:19:32.220 NVM Sets: Not Supported 00:19:32.220 Read Recovery Levels: Not Supported 00:19:32.220 Endurance Groups: Not Supported 00:19:32.220 Predictable Latency Mode: Not Supported 00:19:32.220 Traffic Based Keep ALive: Not Supported 00:19:32.220 Namespace Granularity: Not Supported 00:19:32.220 SQ Associations: Not Supported 00:19:32.220 UUID List: Not Supported 00:19:32.220 Multi-Domain Subsystem: Not Supported 00:19:32.220 Fixed Capacity Management: Not Supported 00:19:32.220 Variable Capacity Management: Not Supported 00:19:32.220 Delete Endurance Group: Not Supported 00:19:32.221 Delete NVM Set: Not Supported 00:19:32.221 Extended LBA Formats Supported: Not Supported 00:19:32.221 Flexible Data Placement Supported: Not Supported 00:19:32.221 00:19:32.221 Controller Memory Buffer Support 00:19:32.221 ================================ 00:19:32.221 Supported: No 00:19:32.221 00:19:32.221 Persistent Memory Region Support 00:19:32.221 ================================ 00:19:32.221 Supported: No 00:19:32.221 00:19:32.221 Admin Command Set Attributes 00:19:32.221 ============================ 00:19:32.221 Security Send/Receive: Not Supported 00:19:32.221 Format NVM: Not Supported 00:19:32.221 Firmware Activate/Download: Not Supported 00:19:32.221 Namespace Management: Not Supported 00:19:32.221 Device Self-Test: Not Supported 00:19:32.221 Directives: Not Supported 00:19:32.221 NVMe-MI: Not Supported 00:19:32.221 Virtualization Management: Not Supported 00:19:32.221 Doorbell Buffer Config: Not Supported 00:19:32.221 Get LBA Status Capability: Not Supported 00:19:32.221 Command & Feature Lockdown Capability: Not Supported 00:19:32.221 Abort Command Limit: 4 00:19:32.221 Async Event Request Limit: 4 00:19:32.221 Number of Firmware Slots: N/A 00:19:32.221 Firmware Slot 1 Read-Only: N/A 00:19:32.221 Firmware Activation Without Reset: N/A 00:19:32.221 Multiple Update Detection Support: N/A 00:19:32.221 Firmware Update Granularity: No Information Provided 00:19:32.221 Per-Namespace SMART 
Log: No 00:19:32.221 Asymmetric Namespace Access Log Page: Not Supported 00:19:32.221 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:32.221 Command Effects Log Page: Supported 00:19:32.221 Get Log Page Extended Data: Supported 00:19:32.221 Telemetry Log Pages: Not Supported 00:19:32.221 Persistent Event Log Pages: Not Supported 00:19:32.221 Supported Log Pages Log Page: May Support 00:19:32.221 Commands Supported & Effects Log Page: Not Supported 00:19:32.221 Feature Identifiers & Effects Log Page:May Support 00:19:32.221 NVMe-MI Commands & Effects Log Page: May Support 00:19:32.221 Data Area 4 for Telemetry Log: Not Supported 00:19:32.221 Error Log Page Entries Supported: 128 00:19:32.221 Keep Alive: Supported 00:19:32.221 Keep Alive Granularity: 10000 ms 00:19:32.221 00:19:32.221 NVM Command Set Attributes 00:19:32.221 ========================== 00:19:32.221 Submission Queue Entry Size 00:19:32.221 Max: 64 00:19:32.221 Min: 64 00:19:32.221 Completion Queue Entry Size 00:19:32.221 Max: 16 00:19:32.221 Min: 16 00:19:32.221 Number of Namespaces: 32 00:19:32.221 Compare Command: Supported 00:19:32.221 Write Uncorrectable Command: Not Supported 00:19:32.221 Dataset Management Command: Supported 00:19:32.221 Write Zeroes Command: Supported 00:19:32.221 Set Features Save Field: Not Supported 00:19:32.221 Reservations: Supported 00:19:32.221 Timestamp: Not Supported 00:19:32.221 Copy: Supported 00:19:32.221 Volatile Write Cache: Present 00:19:32.221 Atomic Write Unit (Normal): 1 00:19:32.221 Atomic Write Unit (PFail): 1 00:19:32.221 Atomic Compare & Write Unit: 1 00:19:32.221 Fused Compare & Write: Supported 00:19:32.221 Scatter-Gather List 00:19:32.221 SGL Command Set: Supported 00:19:32.221 SGL Keyed: Supported 00:19:32.221 SGL Bit Bucket Descriptor: Not Supported 00:19:32.221 SGL Metadata Pointer: Not Supported 00:19:32.221 Oversized SGL: Not Supported 00:19:32.221 SGL Metadata Address: Not Supported 00:19:32.221 SGL Offset: Supported 00:19:32.221 Transport SGL Data Block: Not Supported 00:19:32.221 Replay Protected Memory Block: Not Supported 00:19:32.221 00:19:32.221 Firmware Slot Information 00:19:32.221 ========================= 00:19:32.221 Active slot: 1 00:19:32.221 Slot 1 Firmware Revision: 25.01 00:19:32.221 00:19:32.221 00:19:32.221 Commands Supported and Effects 00:19:32.221 ============================== 00:19:32.221 Admin Commands 00:19:32.221 -------------- 00:19:32.221 Get Log Page (02h): Supported 00:19:32.221 Identify (06h): Supported 00:19:32.221 Abort (08h): Supported 00:19:32.221 Set Features (09h): Supported 00:19:32.221 Get Features (0Ah): Supported 00:19:32.221 Asynchronous Event Request (0Ch): Supported 00:19:32.221 Keep Alive (18h): Supported 00:19:32.221 I/O Commands 00:19:32.221 ------------ 00:19:32.221 Flush (00h): Supported LBA-Change 00:19:32.221 Write (01h): Supported LBA-Change 00:19:32.221 Read (02h): Supported 00:19:32.221 Compare (05h): Supported 00:19:32.221 Write Zeroes (08h): Supported LBA-Change 00:19:32.221 Dataset Management (09h): Supported LBA-Change 00:19:32.221 Copy (19h): Supported LBA-Change 00:19:32.221 00:19:32.221 Error Log 00:19:32.221 ========= 00:19:32.221 00:19:32.221 Arbitration 00:19:32.221 =========== 00:19:32.221 Arbitration Burst: 1 00:19:32.221 00:19:32.221 Power Management 00:19:32.221 ================ 00:19:32.221 Number of Power States: 1 00:19:32.221 Current Power State: Power State #0 00:19:32.221 Power State #0: 00:19:32.221 Max Power: 0.00 W 00:19:32.221 Non-Operational State: Operational 00:19:32.221 Entry Latency: Not 
Reported 00:19:32.221 Exit Latency: Not Reported 00:19:32.221 Relative Read Throughput: 0 00:19:32.221 Relative Read Latency: 0 00:19:32.221 Relative Write Throughput: 0 00:19:32.221 Relative Write Latency: 0 00:19:32.221 Idle Power: Not Reported 00:19:32.221 Active Power: Not Reported 00:19:32.221 Non-Operational Permissive Mode: Not Supported 00:19:32.221 00:19:32.221 Health Information 00:19:32.221 ================== 00:19:32.221 Critical Warnings: 00:19:32.221 Available Spare Space: OK 00:19:32.221 Temperature: OK 00:19:32.221 Device Reliability: OK 00:19:32.221 Read Only: No 00:19:32.221 Volatile Memory Backup: OK 00:19:32.221 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:32.221 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:32.221 Available Spare: 0% 00:19:32.221 Available Spare Threshold: 0% 00:19:32.221 Life Percentage Used:[2024-11-27 06:14:37.072405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19f6750) 00:19:32.221 [2024-11-27 06:14:37.072428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.221 [2024-11-27 06:14:37.072466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5b1c0, cid 7, qid 0 00:19:32.221 [2024-11-27 06:14:37.072532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.221 [2024-11-27 06:14:37.072542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.221 [2024-11-27 06:14:37.072548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5b1c0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072606] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:19:32.221 [2024-11-27 06:14:37.072621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a740) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.221 [2024-11-27 06:14:37.072643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5a8c0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.221 [2024-11-27 06:14:37.072654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5aa40) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.221 [2024-11-27 06:14:37.072665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.221 [2024-11-27 06:14:37.072680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072689] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.221 [2024-11-27 06:14:37.072701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.221 [2024-11-27 06:14:37.072735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.221 [2024-11-27 06:14:37.072783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.221 [2024-11-27 06:14:37.072792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.221 [2024-11-27 06:14:37.072798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.221 [2024-11-27 06:14:37.072839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.221 [2024-11-27 06:14:37.072869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.221 [2024-11-27 06:14:37.072935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.221 [2024-11-27 06:14:37.072953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.221 [2024-11-27 06:14:37.072959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.072966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.072974] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:32.221 [2024-11-27 06:14:37.072982] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:32.221 [2024-11-27 06:14:37.072996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.221 [2024-11-27 06:14:37.073020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.221 [2024-11-27 06:14:37.073049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.221 [2024-11-27 06:14:37.073097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.221 [2024-11-27 06:14:37.073107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.221 [2024-11-27 06:14:37.073112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.073158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073167] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.221 [2024-11-27 06:14:37.073184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.221 [2024-11-27 06:14:37.073212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.221 [2024-11-27 06:14:37.073261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.221 [2024-11-27 06:14:37.073271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.221 [2024-11-27 06:14:37.073277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.221 [2024-11-27 06:14:37.073298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.221 [2024-11-27 06:14:37.073310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.073321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.073346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.073394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.073403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.073409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.073430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.073453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.073478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.073520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.073529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.073535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.073556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 
[2024-11-27 06:14:37.073579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.073604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.073653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.073663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.073673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.073694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.073718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.073745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.073787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.073797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.073802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.073824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.073847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.073872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.073931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.073947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.073953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.073975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.073988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.073999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074025] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 
[2024-11-27 06:14:37.074498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:32.222 [2024-11-27 06:14:37.074863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.074877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.074900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.074925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.074970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.074979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.074985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.074991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.075006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.075028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.075053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.075099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.075115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.075121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.075162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.075186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.075215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.075264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.075273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.075277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.075293] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.075311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.075334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.075378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.075390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.075395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.075412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.222 [2024-11-27 06:14:37.075430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.222 [2024-11-27 06:14:37.075451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.222 [2024-11-27 06:14:37.075517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.222 [2024-11-27 06:14:37.075529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.222 [2024-11-27 06:14:37.075534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.222 [2024-11-27 06:14:37.075551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.222 [2024-11-27 06:14:37.075560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.223 [2024-11-27 06:14:37.075569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.223 [2024-11-27 06:14:37.075590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.223 [2024-11-27 06:14:37.075636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.223 [2024-11-27 06:14:37.075643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.223 [2024-11-27 06:14:37.075647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.223 [2024-11-27 06:14:37.075663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075673] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.223 [2024-11-27 06:14:37.075685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.223 [2024-11-27 06:14:37.075709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.223 [2024-11-27 06:14:37.075758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.223 [2024-11-27 06:14:37.075767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.223 [2024-11-27 06:14:37.075771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.223 [2024-11-27 06:14:37.075788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.223 [2024-11-27 06:14:37.075805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.223 [2024-11-27 06:14:37.075826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.223 [2024-11-27 06:14:37.075869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.223 [2024-11-27 06:14:37.075876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.223 [2024-11-27 06:14:37.075880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.223 [2024-11-27 06:14:37.075896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.075906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.223 [2024-11-27 06:14:37.075914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.223 [2024-11-27 06:14:37.075934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.223 [2024-11-27 06:14:37.075991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.223 [2024-11-27 06:14:37.075998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.223 [2024-11-27 06:14:37.076003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.076007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.223 [2024-11-27 06:14:37.076019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.076024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.076028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.223 [2024-11-27 06:14:37.076036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.223 [2024-11-27 06:14:37.076057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.223 [2024-11-27 06:14:37.076106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.223 [2024-11-27 06:14:37.076113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.223 [2024-11-27 06:14:37.076117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.076122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.223 [2024-11-27 06:14:37.080161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.080186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.080193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f6750) 00:19:32.223 [2024-11-27 06:14:37.080206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.223 [2024-11-27 06:14:37.080241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a5abc0, cid 3, qid 0 00:19:32.223 [2024-11-27 06:14:37.080305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.223 [2024-11-27 06:14:37.080318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.223 [2024-11-27 06:14:37.080322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.223 [2024-11-27 06:14:37.080327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a5abc0) on tqpair=0x19f6750 00:19:32.223 [2024-11-27 06:14:37.080337] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:19:32.223 0% 00:19:32.223 Data Units Read: 0 00:19:32.223 Data Units Written: 0 00:19:32.223 Host Read Commands: 0 00:19:32.223 Host Write Commands: 0 00:19:32.223 Controller Busy Time: 0 minutes 00:19:32.223 Power Cycles: 0 00:19:32.223 Power On Hours: 0 hours 00:19:32.223 Unsafe Shutdowns: 0 00:19:32.223 Unrecoverable Media Errors: 0 00:19:32.223 Lifetime Error Log Entries: 0 00:19:32.223 Warning Temperature Time: 0 minutes 00:19:32.223 Critical Temperature Time: 0 minutes 00:19:32.223 00:19:32.223 Number of Queues 00:19:32.223 ================ 00:19:32.223 Number of I/O Submission Queues: 127 00:19:32.223 Number of I/O Completion Queues: 127 00:19:32.223 00:19:32.223 Active Namespaces 00:19:32.223 ================= 00:19:32.223 Namespace ID:1 00:19:32.223 Error Recovery Timeout: Unlimited 00:19:32.223 Command Set Identifier: NVM (00h) 00:19:32.223 Deallocate: Supported 00:19:32.223 Deallocated/Unwritten Error: Not Supported 00:19:32.223 Deallocated Read Value: Unknown 00:19:32.223 Deallocate in Write Zeroes: Not Supported 00:19:32.223 Deallocated Guard Field: 0xFFFF 00:19:32.223 Flush: Supported 00:19:32.223 Reservation: Supported 00:19:32.223 Namespace Sharing Capabilities: Multiple Controllers 00:19:32.223 Size (in LBAs): 131072 (0GiB) 00:19:32.223 Capacity (in LBAs): 131072 (0GiB) 00:19:32.223 Utilization (in LBAs): 131072 (0GiB) 00:19:32.223 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:32.223 EUI64: ABCDEF0123456789 00:19:32.223 UUID: b61b7aa0-8be1-450a-a012-eb411738da6a 00:19:32.223 Thin Provisioning: Not Supported 00:19:32.223 Per-NS Atomic Units: Yes 00:19:32.223 Atomic Boundary Size 
(Normal): 0 00:19:32.223 Atomic Boundary Size (PFail): 0 00:19:32.223 Atomic Boundary Offset: 0 00:19:32.223 Maximum Single Source Range Length: 65535 00:19:32.223 Maximum Copy Length: 65535 00:19:32.223 Maximum Source Range Count: 1 00:19:32.223 NGUID/EUI64 Never Reused: No 00:19:32.223 Namespace Write Protected: No 00:19:32.223 Number of LBA Formats: 1 00:19:32.223 Current LBA Format: LBA Format #00 00:19:32.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:32.223 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:32.223 rmmod nvme_tcp 00:19:32.223 rmmod nvme_fabrics 00:19:32.223 rmmod nvme_keyring 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74439 ']' 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74439 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74439 ']' 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74439 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74439 00:19:32.223 killing process with pid 74439 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74439' 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74439 00:19:32.223 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74439 
00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:32.482 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:19:32.741 00:19:32.741 real 0m2.937s 00:19:32.741 user 0m7.480s 00:19:32.741 sys 0m0.797s 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:32.741 ************************************ 00:19:32.741 END TEST nvmf_identify 00:19:32.741 ************************************ 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.741 ************************************ 00:19:32.741 START TEST nvmf_perf 00:19:32.741 ************************************ 00:19:32.741 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:33.000 * Looking for test storage... 00:19:33.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:33.000 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.001 --rc genhtml_branch_coverage=1 00:19:33.001 --rc genhtml_function_coverage=1 00:19:33.001 --rc genhtml_legend=1 00:19:33.001 --rc geninfo_all_blocks=1 00:19:33.001 --rc geninfo_unexecuted_blocks=1 00:19:33.001 00:19:33.001 ' 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.001 --rc genhtml_branch_coverage=1 00:19:33.001 --rc genhtml_function_coverage=1 00:19:33.001 --rc genhtml_legend=1 00:19:33.001 --rc geninfo_all_blocks=1 00:19:33.001 --rc geninfo_unexecuted_blocks=1 00:19:33.001 00:19:33.001 ' 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.001 --rc genhtml_branch_coverage=1 00:19:33.001 --rc genhtml_function_coverage=1 00:19:33.001 --rc genhtml_legend=1 00:19:33.001 --rc geninfo_all_blocks=1 00:19:33.001 --rc geninfo_unexecuted_blocks=1 00:19:33.001 00:19:33.001 ' 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.001 --rc genhtml_branch_coverage=1 00:19:33.001 --rc genhtml_function_coverage=1 00:19:33.001 --rc genhtml_legend=1 00:19:33.001 --rc geninfo_all_blocks=1 00:19:33.001 --rc geninfo_unexecuted_blocks=1 00:19:33.001 00:19:33.001 ' 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.001 06:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.001 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:33.002 Cannot find device "nvmf_init_br" 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:33.002 Cannot find device "nvmf_init_br2" 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:33.002 Cannot find device "nvmf_tgt_br" 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.002 Cannot find device "nvmf_tgt_br2" 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:33.002 Cannot find device "nvmf_init_br" 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:19:33.002 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:33.260 Cannot find device "nvmf_init_br2" 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:33.260 Cannot find device "nvmf_tgt_br" 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:33.260 Cannot find device "nvmf_tgt_br2" 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:33.260 Cannot find device "nvmf_br" 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:33.260 Cannot find device "nvmf_init_if" 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:33.260 Cannot find device "nvmf_init_if2" 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.260 06:14:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.260 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:19:33.519 00:19:33.519 --- 10.0.0.3 ping statistics --- 00:19:33.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.519 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.519 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:33.519 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:19:33.519 00:19:33.519 --- 10.0.0.4 ping statistics --- 00:19:33.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.519 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:33.519 00:19:33.519 --- 10.0.0.1 ping statistics --- 00:19:33.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.519 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:33.519 00:19:33.519 --- 10.0.0.2 ping statistics --- 00:19:33.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.519 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74705 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74705 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74705 ']' 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.519 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.520 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:33.520 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.520 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:33.520 [2024-11-27 06:14:38.541964] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:33.520 [2024-11-27 06:14:38.542072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.779 [2024-11-27 06:14:38.688736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.779 [2024-11-27 06:14:38.746982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.779 [2024-11-27 06:14:38.747053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.779 [2024-11-27 06:14:38.747064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.779 [2024-11-27 06:14:38.747079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.779 [2024-11-27 06:14:38.747086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.779 [2024-11-27 06:14:38.748322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.779 [2024-11-27 06:14:38.748861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.779 [2024-11-27 06:14:38.749190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.779 [2024-11-27 06:14:38.749724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.779 [2024-11-27 06:14:38.806865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:34.038 06:14:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:34.605 06:14:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:34.605 06:14:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:34.605 06:14:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:34.865 06:14:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.125 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:35.125 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:19:35.125 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:35.125 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:35.125 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.384 [2024-11-27 06:14:40.245621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.384 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:35.642 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:35.642 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:35.902 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:35.902 06:14:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:36.161 06:14:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:36.420 [2024-11-27 06:14:41.436086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:36.420 06:14:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:36.680 06:14:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:36.680 06:14:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:36.680 06:14:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:36.680 06:14:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:38.058 Initializing NVMe Controllers 00:19:38.058 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:38.058 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:38.058 Initialization complete. Launching workers. 00:19:38.058 ======================================================== 00:19:38.058 Latency(us) 00:19:38.058 Device Information : IOPS MiB/s Average min max 00:19:38.058 PCIE (0000:00:10.0) NSID 1 from core 0: 20711.94 80.91 1544.50 298.05 9243.53 00:19:38.058 ======================================================== 00:19:38.058 Total : 20711.94 80.91 1544.50 298.05 9243.53 00:19:38.058 00:19:38.058 06:14:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:39.453 Initializing NVMe Controllers 00:19:39.453 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:39.453 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:39.453 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:39.453 Initialization complete. Launching workers. 
00:19:39.453 ======================================================== 00:19:39.453 Latency(us) 00:19:39.453 Device Information : IOPS MiB/s Average min max 00:19:39.453 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3418.96 13.36 292.15 102.75 7150.82 00:19:39.453 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8096.37 7017.05 12057.84 00:19:39.453 ======================================================== 00:19:39.453 Total : 3542.96 13.84 565.29 102.75 12057.84 00:19:39.453 00:19:39.453 06:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:40.829 Initializing NVMe Controllers 00:19:40.829 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:40.829 Initialization complete. Launching workers. 00:19:40.829 ======================================================== 00:19:40.829 Latency(us) 00:19:40.829 Device Information : IOPS MiB/s Average min max 00:19:40.829 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9033.44 35.29 3544.33 612.77 9294.37 00:19:40.829 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3747.77 14.64 8596.12 6157.64 16124.56 00:19:40.829 ======================================================== 00:19:40.829 Total : 12781.21 49.93 5025.64 612.77 16124.56 00:19:40.829 00:19:40.829 06:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:40.829 06:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:43.364 Initializing NVMe Controllers 00:19:43.364 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.364 Controller IO queue size 128, less than required. 00:19:43.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.364 Controller IO queue size 128, less than required. 00:19:43.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.364 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:43.364 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:43.364 Initialization complete. Launching workers. 
00:19:43.364 ======================================================== 00:19:43.364 Latency(us) 00:19:43.364 Device Information : IOPS MiB/s Average min max 00:19:43.364 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1640.70 410.17 79703.44 40639.00 137616.16 00:19:43.364 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 669.17 167.29 201568.78 77207.52 318443.23 00:19:43.364 ======================================================== 00:19:43.364 Total : 2309.87 577.47 115008.00 40639.00 318443.23 00:19:43.364 00:19:43.364 06:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:19:43.623 Initializing NVMe Controllers 00:19:43.623 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.623 Controller IO queue size 128, less than required. 00:19:43.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.623 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:43.623 Controller IO queue size 128, less than required. 00:19:43.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.623 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:43.623 WARNING: Some requested NVMe devices were skipped 00:19:43.623 No valid NVMe controllers or AIO or URING devices found 00:19:43.623 06:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:19:46.161 Initializing NVMe Controllers 00:19:46.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.161 Controller IO queue size 128, less than required. 00:19:46.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.161 Controller IO queue size 128, less than required. 00:19:46.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:46.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:46.161 Initialization complete. Launching workers. 
00:19:46.161 00:19:46.161 ==================== 00:19:46.161 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:46.161 TCP transport: 00:19:46.161 polls: 9041 00:19:46.161 idle_polls: 5627 00:19:46.161 sock_completions: 3414 00:19:46.161 nvme_completions: 5871 00:19:46.161 submitted_requests: 8778 00:19:46.161 queued_requests: 1 00:19:46.161 00:19:46.161 ==================== 00:19:46.161 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:46.161 TCP transport: 00:19:46.161 polls: 11407 00:19:46.161 idle_polls: 7696 00:19:46.161 sock_completions: 3711 00:19:46.161 nvme_completions: 5817 00:19:46.161 submitted_requests: 8706 00:19:46.161 queued_requests: 1 00:19:46.161 ======================================================== 00:19:46.161 Latency(us) 00:19:46.161 Device Information : IOPS MiB/s Average min max 00:19:46.161 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1467.43 366.86 89299.19 45456.54 139293.60 00:19:46.161 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1453.93 363.48 89402.56 31178.66 152647.71 00:19:46.161 ======================================================== 00:19:46.161 Total : 2921.37 730.34 89350.64 31178.66 152647.71 00:19:46.161 00:19:46.161 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:46.161 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.420 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.679 rmmod nvme_tcp 00:19:46.679 rmmod nvme_fabrics 00:19:46.679 rmmod nvme_keyring 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74705 ']' 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74705 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74705 ']' 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74705 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74705 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.679 killing process with pid 74705 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74705' 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74705 00:19:46.679 06:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74705 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:47.247 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.512 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:47.512 00:19:47.512 real 0m14.725s 00:19:47.512 user 0m53.274s 00:19:47.512 sys 0m4.145s 00:19:47.512 06:14:52 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.513 06:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:47.513 ************************************ 00:19:47.513 END TEST nvmf_perf 00:19:47.513 ************************************ 00:19:47.513 06:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:47.513 06:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.513 06:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.513 06:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.513 ************************************ 00:19:47.513 START TEST nvmf_fio_host 00:19:47.513 ************************************ 00:19:47.513 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:47.776 * Looking for test storage... 00:19:47.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.776 --rc genhtml_branch_coverage=1 00:19:47.776 --rc genhtml_function_coverage=1 00:19:47.776 --rc genhtml_legend=1 00:19:47.776 --rc geninfo_all_blocks=1 00:19:47.776 --rc geninfo_unexecuted_blocks=1 00:19:47.776 00:19:47.776 ' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.776 --rc genhtml_branch_coverage=1 00:19:47.776 --rc genhtml_function_coverage=1 00:19:47.776 --rc genhtml_legend=1 00:19:47.776 --rc geninfo_all_blocks=1 00:19:47.776 --rc geninfo_unexecuted_blocks=1 00:19:47.776 00:19:47.776 ' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.776 --rc genhtml_branch_coverage=1 00:19:47.776 --rc genhtml_function_coverage=1 00:19:47.776 --rc genhtml_legend=1 00:19:47.776 --rc geninfo_all_blocks=1 00:19:47.776 --rc geninfo_unexecuted_blocks=1 00:19:47.776 00:19:47.776 ' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.776 --rc genhtml_branch_coverage=1 00:19:47.776 --rc genhtml_function_coverage=1 00:19:47.776 --rc genhtml_legend=1 00:19:47.776 --rc geninfo_all_blocks=1 00:19:47.776 --rc geninfo_unexecuted_blocks=1 00:19:47.776 00:19:47.776 ' 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.776 06:14:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:47.776 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.777 06:14:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:47.777 Cannot find device "nvmf_init_br" 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:47.777 Cannot find device "nvmf_init_br2" 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:47.777 Cannot find device "nvmf_tgt_br" 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:47.777 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:47.777 Cannot find device "nvmf_tgt_br2" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:48.036 Cannot find device "nvmf_init_br" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:48.036 Cannot find device "nvmf_init_br2" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:48.036 Cannot find device "nvmf_tgt_br" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:48.036 Cannot find device "nvmf_tgt_br2" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:48.036 Cannot find device "nvmf_br" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:48.036 Cannot find device "nvmf_init_if" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:48.036 Cannot find device "nvmf_init_if2" 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.036 06:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:48.036 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:48.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:48.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:19:48.295 00:19:48.295 --- 10.0.0.3 ping statistics --- 00:19:48.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.295 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:48.295 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:48.295 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:19:48.295 00:19:48.295 --- 10.0.0.4 ping statistics --- 00:19:48.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.295 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:48.295 00:19:48.295 --- 10.0.0.1 ping statistics --- 00:19:48.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.295 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:48.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:48.295 00:19:48.295 --- 10.0.0.2 ping statistics --- 00:19:48.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.295 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.295 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75166 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75166 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75166 ']' 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.296 06:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.296 [2024-11-27 06:14:53.345009] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:48.296 [2024-11-27 06:14:53.345114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.555 [2024-11-27 06:14:53.502891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.555 [2024-11-27 06:14:53.568496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.555 [2024-11-27 06:14:53.568567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.555 [2024-11-27 06:14:53.568581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.555 [2024-11-27 06:14:53.568592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.555 [2024-11-27 06:14:53.568602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.555 [2024-11-27 06:14:53.569885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.555 [2024-11-27 06:14:53.570094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.555 [2024-11-27 06:14:53.571051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.555 [2024-11-27 06:14:53.571098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.555 [2024-11-27 06:14:53.630511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:49.490 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.490 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:19:49.490 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:49.761 [2024-11-27 06:14:54.592279] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.761 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:49.761 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.761 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.761 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:50.072 Malloc1 00:19:50.072 06:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:50.333 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:50.593 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.593 [2024-11-27 06:14:55.655943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.593 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:51.162 06:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:51.162 06:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:51.162 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:51.162 fio-3.35 00:19:51.162 Starting 1 thread 00:19:53.723 00:19:53.723 test: (groupid=0, jobs=1): err= 0: pid=75248: Wed Nov 27 06:14:58 2024 00:19:53.723 read: IOPS=8728, BW=34.1MiB/s (35.8MB/s)(68.4MiB/2006msec) 00:19:53.723 slat (nsec): min=1721, max=336949, avg=2303.97, stdev=3478.56 00:19:53.723 clat (usec): min=1981, max=14203, avg=7638.30, stdev=826.06 00:19:53.723 lat (usec): min=2016, max=14206, avg=7640.60, stdev=825.85 00:19:53.723 clat percentiles (usec): 00:19:53.723 | 1.00th=[ 6128], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:19:53.723 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:19:53.723 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[ 8979], 00:19:53.723 | 99.00th=[10028], 99.50th=[11600], 99.90th=[13566], 99.95th=[13829], 00:19:53.723 | 99.99th=[14091] 00:19:53.723 bw ( KiB/s): min=33640, max=35680, per=99.95%, avg=34898.00, stdev=955.33, samples=4 00:19:53.723 iops : min= 8410, max= 8920, avg=8724.50, stdev=238.83, samples=4 00:19:53.723 write: IOPS=8727, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2006msec); 0 zone resets 00:19:53.723 slat (nsec): min=1787, max=174927, avg=2425.58, stdev=2963.42 00:19:53.723 clat (usec): min=1879, max=13661, avg=6960.10, stdev=783.15 00:19:53.723 lat (usec): min=1893, max=13663, avg=6962.53, stdev=783.14 00:19:53.723 
clat percentiles (usec): 00:19:53.723 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:19:53.723 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:19:53.723 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8094], 00:19:53.723 | 99.00th=[ 9372], 99.50th=[10683], 99.90th=[13042], 99.95th=[13304], 00:19:53.723 | 99.99th=[13698] 00:19:53.723 bw ( KiB/s): min=34368, max=35904, per=99.92%, avg=34882.00, stdev=694.00, samples=4 00:19:53.723 iops : min= 8592, max= 8976, avg=8720.50, stdev=173.50, samples=4 00:19:53.723 lat (msec) : 2=0.01%, 4=0.20%, 10=98.95%, 20=0.84% 00:19:53.723 cpu : usr=67.88%, sys=24.24%, ctx=639, majf=0, minf=7 00:19:53.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:53.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.723 issued rwts: total=17510,17507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.723 00:19:53.723 Run status group 0 (all jobs): 00:19:53.723 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.4MiB (71.7MB), run=2006-2006msec 00:19:53.723 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2006-2006msec 00:19:53.723 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:53.724 06:14:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:53.724 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:53.724 fio-3.35 00:19:53.724 Starting 1 thread 00:19:56.276 00:19:56.276 test: (groupid=0, jobs=1): err= 0: pid=75291: Wed Nov 27 06:15:01 2024 00:19:56.276 read: IOPS=7947, BW=124MiB/s (130MB/s)(249MiB/2008msec) 00:19:56.276 slat (usec): min=2, max=140, avg= 3.53, stdev= 2.63 00:19:56.276 clat (usec): min=2248, max=17496, avg=8943.33, stdev=2585.39 00:19:56.276 lat (usec): min=2251, max=17499, avg=8946.86, stdev=2585.45 00:19:56.276 clat percentiles (usec): 00:19:56.276 | 1.00th=[ 4359], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6718], 00:19:56.276 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 9372], 00:19:56.276 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12387], 95.00th=[13960], 00:19:56.276 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16581], 99.95th=[16909], 00:19:56.276 | 99.99th=[17171] 00:19:56.276 bw ( KiB/s): min=56544, max=74880, per=52.52%, avg=66784.00, stdev=7650.30, samples=4 00:19:56.276 iops : min= 3534, max= 4680, avg=4174.00, stdev=478.14, samples=4 00:19:56.276 write: IOPS=4811, BW=75.2MiB/s (78.8MB/s)(137MiB/1824msec); 0 zone resets 00:19:56.276 slat (usec): min=30, max=409, avg=36.02, stdev=10.65 00:19:56.276 clat (usec): min=3800, max=20742, avg=12436.27, stdev=2406.31 00:19:56.276 lat (usec): min=3864, max=20773, avg=12472.29, stdev=2407.99 00:19:56.276 clat percentiles (usec): 00:19:56.276 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:19:56.276 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12256], 60.00th=[12911], 00:19:56.276 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15664], 95.00th=[16712], 00:19:56.276 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19792], 99.95th=[20055], 00:19:56.276 | 99.99th=[20841] 00:19:56.276 bw ( KiB/s): min=60032, max=78848, per=90.87%, avg=69952.00, stdev=7761.52, samples=4 00:19:56.276 iops : min= 3752, max= 4928, avg=4372.00, stdev=485.10, samples=4 00:19:56.276 lat (msec) : 4=0.33%, 10=50.65%, 20=49.00%, 50=0.01% 00:19:56.276 cpu : usr=79.82%, sys=15.84%, ctx=4, majf=0, minf=18 00:19:56.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:56.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:56.276 issued rwts: total=15958,8776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:56.276 00:19:56.276 Run status group 0 (all jobs): 00:19:56.276 
READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2008-2008msec 00:19:56.276 WRITE: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s (78.8MB/s-78.8MB/s), io=137MiB (144MB), run=1824-1824msec 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.276 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.535 rmmod nvme_tcp 00:19:56.535 rmmod nvme_fabrics 00:19:56.535 rmmod nvme_keyring 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75166 ']' 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75166 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75166 ']' 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75166 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75166 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.535 killing process with pid 75166 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75166' 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75166 00:19:56.535 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75166 00:19:56.793 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:56.793 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:56.793 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:56.793 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:56.793 06:15:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:56.793 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:56.793 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:56.794 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:57.053 00:19:57.053 real 0m9.400s 00:19:57.053 user 0m36.970s 00:19:57.053 sys 0m2.587s 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.053 06:15:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.053 ************************************ 00:19:57.053 END TEST nvmf_fio_host 00:19:57.053 ************************************ 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.053 ************************************ 00:19:57.053 START TEST nvmf_failover 
00:19:57.053 ************************************ 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:57.053 * Looking for test storage... 00:19:57.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.053 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.313 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.313 --rc genhtml_branch_coverage=1 00:19:57.313 --rc genhtml_function_coverage=1 00:19:57.313 --rc genhtml_legend=1 00:19:57.314 --rc geninfo_all_blocks=1 00:19:57.314 --rc geninfo_unexecuted_blocks=1 00:19:57.314 00:19:57.314 ' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.314 --rc genhtml_branch_coverage=1 00:19:57.314 --rc genhtml_function_coverage=1 00:19:57.314 --rc genhtml_legend=1 00:19:57.314 --rc geninfo_all_blocks=1 00:19:57.314 --rc geninfo_unexecuted_blocks=1 00:19:57.314 00:19:57.314 ' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.314 --rc genhtml_branch_coverage=1 00:19:57.314 --rc genhtml_function_coverage=1 00:19:57.314 --rc genhtml_legend=1 00:19:57.314 --rc geninfo_all_blocks=1 00:19:57.314 --rc geninfo_unexecuted_blocks=1 00:19:57.314 00:19:57.314 ' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.314 --rc genhtml_branch_coverage=1 00:19:57.314 --rc genhtml_function_coverage=1 00:19:57.314 --rc genhtml_legend=1 00:19:57.314 --rc geninfo_all_blocks=1 00:19:57.314 --rc geninfo_unexecuted_blocks=1 00:19:57.314 00:19:57.314 ' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.314 
06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
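Note on the nvmftestinit trace that follows: it builds the virtual NVMe/TCP test network out of veth pairs — the initiator ends stay in the default namespace, the target ends are moved into the nvmf_tgt_ns_spdk namespace, and all bridge-side peers are enslaved to nvmf_br. A condensed sketch of that sequence, with every name and address copied from the trace below (the second interface pair, the link-up commands and the iptables ACCEPT rules are elided for brevity):
ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                       # bridge the two sides together
ip link set nvmf_tgt_br master nvmf_br
The trace then verifies the topology with pings: 10.0.0.3 and 10.0.0.4 from the default namespace, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk.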
00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:57.314 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:57.315 Cannot find device "nvmf_init_br" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:57.315 Cannot find device "nvmf_init_br2" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:57.315 Cannot find device "nvmf_tgt_br" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.315 Cannot find device "nvmf_tgt_br2" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:57.315 Cannot find device "nvmf_init_br" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:57.315 Cannot find device "nvmf_init_br2" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:57.315 Cannot find device "nvmf_tgt_br" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:57.315 Cannot find device "nvmf_tgt_br2" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:57.315 Cannot find device "nvmf_br" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:57.315 Cannot find device "nvmf_init_if" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:57.315 Cannot find device "nvmf_init_if2" 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:57.315 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.575 
06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:57.575 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:57.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:57.576 00:19:57.576 --- 10.0.0.3 ping statistics --- 00:19:57.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.576 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:57.576 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:57.576 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:19:57.576 00:19:57.576 --- 10.0.0.4 ping statistics --- 00:19:57.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.576 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:57.576 00:19:57.576 --- 10.0.0.1 ping statistics --- 00:19:57.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.576 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:57.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:19:57.576 00:19:57.576 --- 10.0.0.2 ping statistics --- 00:19:57.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.576 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:57.576 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75561 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75561 00:19:57.835 06:15:02 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75561 ']' 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.835 06:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:57.835 [2024-11-27 06:15:02.759250] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:19:57.836 [2024-11-27 06:15:02.759359] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.836 [2024-11-27 06:15:02.915344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.095 [2024-11-27 06:15:02.971183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.095 [2024-11-27 06:15:02.971264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.095 [2024-11-27 06:15:02.971283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.095 [2024-11-27 06:15:02.971294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.095 [2024-11-27 06:15:02.971304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
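What the nvmfappstart/waitforlisten pair above amounts to: launch nvmf_tgt inside the target namespace with the flags shown, record its pid, and poll the RPC UNIX socket until the application answers. A minimal sketch under those assumptions — the flags and paths are taken from the trace, but the polling loop is illustrative, not the actual waitforlisten helper from autotest_common.sh:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                      # 75561 in this run
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                   # wait for /var/tmp/spdk.sock to start answering RPCs
done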
00:19:58.095 [2024-11-27 06:15:02.972649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.095 [2024-11-27 06:15:02.972759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.095 [2024-11-27 06:15:02.972769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.095 [2024-11-27 06:15:03.033210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.095 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:58.354 [2024-11-27 06:15:03.441126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.613 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:58.872 Malloc0 00:19:58.872 06:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.131 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:59.390 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:59.390 [2024-11-27 06:15:04.471341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:59.653 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:59.653 [2024-11-27 06:15:04.695465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:59.653 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:59.926 [2024-11-27 06:15:04.931889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75607 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
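The failover fixture assembled above, condensed to its bare RPC sequence (every value is copied from the trace; the $rpc shorthand and the loop over ports are just a compact way of writing the three add_listener calls): one TCP transport, a 64 MB malloc bdev with 512-byte blocks exported through cnode1, and listeners on ports 4420, 4421 and 4422 of 10.0.0.3 for bdevperf to fail over between.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                # three paths for the failover test
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done
bdevperf then attaches controllers on 4420 and 4421 as NVMe0 with -x failover, and the test moves the listener between ports while I/O runs, as the trace below shows.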
00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75607 /var/tmp/bdevperf.sock 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75607 ']' 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.926 06:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:01.303 06:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.303 06:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:01.303 06:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:01.303 NVMe0n1 00:20:01.303 06:15:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:01.562 00:20:01.562 06:15:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75636 00:20:01.562 06:15:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.562 06:15:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:02.939 06:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:02.939 06:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:06.238 06:15:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:06.238 00:20:06.499 06:15:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:06.758 06:15:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:10.047 06:15:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:10.047 [2024-11-27 06:15:14.948029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:10.047 06:15:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:11.096 06:15:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:11.355 06:15:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75636 00:20:17.927 { 00:20:17.928 "results": [ 00:20:17.928 { 00:20:17.928 "job": "NVMe0n1", 00:20:17.928 "core_mask": "0x1", 00:20:17.928 "workload": "verify", 00:20:17.928 "status": "finished", 00:20:17.928 "verify_range": { 00:20:17.928 "start": 0, 00:20:17.928 "length": 16384 00:20:17.928 }, 00:20:17.928 "queue_depth": 128, 00:20:17.928 "io_size": 4096, 00:20:17.928 "runtime": 15.010468, 00:20:17.928 "iops": 8674.146602224528, 00:20:17.928 "mibps": 33.88338516493956, 00:20:17.928 "io_failed": 3469, 00:20:17.928 "io_timeout": 0, 00:20:17.928 "avg_latency_us": 14342.638932311927, 00:20:17.928 "min_latency_us": 606.9527272727273, 00:20:17.928 "max_latency_us": 25022.836363636365 00:20:17.928 } 00:20:17.928 ], 00:20:17.928 "core_count": 1 00:20:17.928 } 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75607 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75607 ']' 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75607 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75607 00:20:17.928 killing process with pid 75607 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75607' 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75607 00:20:17.928 06:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75607 00:20:17.928 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:17.928 [2024-11-27 06:15:05.020666] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:20:17.928 [2024-11-27 06:15:05.020833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75607 ] 00:20:17.928 [2024-11-27 06:15:05.174144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.928 [2024-11-27 06:15:05.235577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.928 [2024-11-27 06:15:05.297877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:17.928 Running I/O for 15 seconds... 
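A quick consistency check on the bdevperf results block printed above (wait 75636): the reported "mibps" is simply iops times the 4096-byte io_size, with all values copied from that JSON block.
awk 'BEGIN { printf "%.2f MiB/s\n", 8674.146602224528 * 4096 / (1024 * 1024) }'
# -> 33.88 MiB/s, matching the "mibps" field for the 15 s verify run at queue depth 128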
00:20:17.928 9360.00 IOPS, 36.56 MiB/s [2024-11-27T06:15:23.025Z] [2024-11-27 06:15:07.908049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.908592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.908712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.908812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.908929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.909000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.909133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.909327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.909467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.909615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.928 [2024-11-27 06:15:07.909775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.909925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.909985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.910058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:17.928 [2024-11-27 06:15:07.910122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.910311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.910398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.910486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.910564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.910657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.910727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.910847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.910915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.910988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.911056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.911159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.911243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.911328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.911439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.911528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.911602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.911675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.911745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.911829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 
06:15:07.911909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.911976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.912110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.912263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.912410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.912608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.912751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.912914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.928 [2024-11-27 06:15:07.912973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-11-27 06:15:07.913038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.913171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.913324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.913452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.913600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.913742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.913893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.913949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.914020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.914099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.914288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.914363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.914445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.914513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.914634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.914712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.914810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.914878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.914955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.915109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.915286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.915436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.915594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.915743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.915899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.915966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.916063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.916145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.916231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.916291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.916371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.916431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.916528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.916603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.916675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.916749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.916822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.916927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.917909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.917969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.918033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.918100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:17.929 [2024-11-27 06:15:07.918268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.918367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.918590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.918667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.918741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-11-27 06:15:07.918807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.918865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.918931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.918989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.919060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.919119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.919206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.929 [2024-11-27 06:15:07.919272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.929 [2024-11-27 06:15:07.919342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.919410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.919478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.919545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.919620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.919700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.919770] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.919830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.919889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.919946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.920912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.920979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.921045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.921112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.921228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.921302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.921372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.921433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.921504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.921572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.921653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.921736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.921805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.921891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.921968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.922972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.922984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.923009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.923023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.923036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.923064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:17.930 [2024-11-27 06:15:07.923079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.923091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.923105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.923118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.923132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.930 [2024-11-27 06:15:07.923163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.923178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.923209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.930 [2024-11-27 06:15:07.923224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.930 [2024-11-27 06:15:07.923273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-11-27 06:15:07.923304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-11-27 06:15:07.923332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-11-27 06:15:07.923361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-11-27 06:15:07.923423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-11-27 06:15:07.923454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 
06:15:07.923474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149ae00 is same with the state(6) to be set 00:20:17.931 [2024-11-27 06:15:07.923522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84536 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84992 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85000 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85008 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85016 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85024 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85032 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.923946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.923955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.923965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85040 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.923988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85048 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85056 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85064 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85072 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85080 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85088 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85096 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85104 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.931 [2024-11-27 06:15:07.924488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.931 [2024-11-27 06:15:07.924514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85112 len:8 PRP1 0x0 PRP2 0x0 00:20:17.931 [2024-11-27 06:15:07.924527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924616] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:17.931 [2024-11-27 06:15:07.924681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.931 [2024-11-27 06:15:07.924703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924719] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.931 [2024-11-27 06:15:07.924733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.931 [2024-11-27 06:15:07.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.931 [2024-11-27 06:15:07.924788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-11-27 06:15:07.924802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:17.931 [2024-11-27 06:15:07.924859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142bc60 (9): Bad file descriptor 00:20:17.931 [2024-11-27 06:15:07.929132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:17.932 [2024-11-27 06:15:07.957670] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:20:17.932 8443.00 IOPS, 32.98 MiB/s [2024-11-27T06:15:23.029Z] 8212.67 IOPS, 32.08 MiB/s [2024-11-27T06:15:23.029Z] 8129.50 IOPS, 31.76 MiB/s [2024-11-27T06:15:23.029Z] [2024-11-27 06:15:11.655965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.932 [2024-11-27 06:15:11.656935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.656983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.656997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.932 [2024-11-27 06:15:11.657213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.932 [2024-11-27 06:15:11.657229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62392 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.657527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:17.933 [2024-11-27 06:15:11.657781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.657972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.657988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.658002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.658033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.658071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.658101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.933 [2024-11-27 06:15:11.658667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.933 [2024-11-27 06:15:11.658682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.933 [2024-11-27 06:15:11.658718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.658975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.658989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:17.934 [2024-11-27 06:15:11.659097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659488] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.934 [2024-11-27 06:15:11.659716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.659950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.659976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.934 [2024-11-27 06:15:11.660031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.934 [2024-11-27 06:15:11.660061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:11.660339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:11.660370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:11.660400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:11.660431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:11.660461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:11.660492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:11.660522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:17.935 [2024-11-27 06:15:11.660552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f370 is same with the state(6) to be set 00:20:17.935 [2024-11-27 06:15:11.660591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.935 [2024-11-27 06:15:11.660603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.935 [2024-11-27 06:15:11.660615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62232 len:8 PRP1 0x0 PRP2 0x0 00:20:17.935 [2024-11-27 06:15:11.660638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660703] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:20:17.935 [2024-11-27 06:15:11.660816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.935 [2024-11-27 06:15:11.660852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.935 [2024-11-27 06:15:11.660894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.935 [2024-11-27 06:15:11.660920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.935 [2024-11-27 06:15:11.660946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:11.660959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:17.935 [2024-11-27 06:15:11.664987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:17.935 [2024-11-27 06:15:11.665033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142bc60 (9): Bad file descriptor 00:20:17.935 [2024-11-27 06:15:11.694778] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
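For context on the notices above: the qpair on 10.0.0.3:4421 fails, its queued I/O is aborted (the ABORTED - SQ DELETION entries), and bdev_nvme fails the controller over to the alternate path on 10.0.0.3:4422 before resetting it successfully. Below is a hedged sketch, not taken from this run, of how two TCP paths to the same subsystem can be registered with SPDK's scripts/rpc.py so that this kind of failover is possible; the controller name Nvme0 is illustrative, the addresses and the subsystem NQN are the ones printed in the log, and the -x/--multipath option is assumed to be available in the SPDK build being used.

    # Hedged sketch: register a primary and an alternate TCP path for one controller.
    # Nvme0 is an assumed name; 10.0.0.3:4421/4422 and the NQN come from the log above.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # When the active path (4421) drops, queued commands are aborted (SQ DELETION
    # notices) and the driver retries them after failing over to 4422.

This is only an illustration of the failover setup the log exercises, not the exact commands the test script ran.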
00:20:17.935 8012.00 IOPS, 31.30 MiB/s [2024-11-27T06:15:23.032Z] 7991.50 IOPS, 31.22 MiB/s [2024-11-27T06:15:23.032Z] 8102.29 IOPS, 31.65 MiB/s [2024-11-27T06:15:23.032Z] 8219.88 IOPS, 32.11 MiB/s [2024-11-27T06:15:23.032Z] 8264.44 IOPS, 32.28 MiB/s [2024-11-27T06:15:23.032Z] [2024-11-27 06:15:16.247764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.247834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.247879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.247895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.247910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.247924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.247939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.247969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.247983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.247997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.935 [2024-11-27 06:15:16.248268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.248296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.248324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.935 [2024-11-27 06:15:16.248338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.935 [2024-11-27 06:15:16.248352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 
06:15:16.248806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.248922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.248967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.248982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.248996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.249039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.249067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.249095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.249123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.249151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.936 [2024-11-27 06:15:16.249179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.936 [2024-11-27 06:15:16.249524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.936 [2024-11-27 06:15:16.249537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.249938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.249981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.249994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:17.937 [2024-11-27 06:15:16.250043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250419] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.937 [2024-11-27 06:15:16.250645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.250674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.250703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.250745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.250773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.937 [2024-11-27 06:15:16.250801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.937 [2024-11-27 06:15:16.250816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.250829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.250844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.250857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.250871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.250884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.250899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.250918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.250934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.250948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.250963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.250977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.250997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251082] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.938 [2024-11-27 06:15:16.251248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149b8b0 is same with the state(6) to be set 00:20:17.938 [2024-11-27 06:15:16.251277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121328 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121336 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251388] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121344 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121352 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121920 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121928 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121936 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121944 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121952 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121960 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121968 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121976 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121984 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 [2024-11-27 06:15:16.251903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121992 len:8 PRP1 0x0 PRP2 0x0 00:20:17.938 [2024-11-27 06:15:16.251916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.938 [2024-11-27 06:15:16.251929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.938 [2024-11-27 06:15:16.251938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.938 
[2024-11-27 06:15:16.251948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121360 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.251960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.251973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.251983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.251992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121368 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.252037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.252047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121376 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.252081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.252091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121384 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.252125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.252151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121392 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.252188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.252198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121400 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.252233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.252242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121408 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:17.939 [2024-11-27 06:15:16.252276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:17.939 [2024-11-27 06:15:16.252286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121416 len:8 PRP1 0x0 PRP2 0x0 00:20:17.939 [2024-11-27 06:15:16.252298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252359] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:20:17.939 [2024-11-27 06:15:16.252433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.939 [2024-11-27 06:15:16.252454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.939 [2024-11-27 06:15:16.252483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.939 [2024-11-27 06:15:16.252522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.939 [2024-11-27 06:15:16.252548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.939 [2024-11-27 06:15:16.252562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:17.939 [2024-11-27 06:15:16.256177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:17.939 [2024-11-27 06:15:16.256218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142bc60 (9): Bad file descriptor 00:20:17.939 [2024-11-27 06:15:16.282024] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
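Note: the dump above is the expected signature of a path failover. When bdev_nvme tears down the submission queue on the old path, every outstanding READ/WRITE on qid:1 completes as ABORTED - SQ DELETION (00/08), the trid fails over from 10.0.0.3:4422 to 10.0.0.3:4420, and the controller reset finishes successfully. The pass criterion for this phase is simply how many such resets occurred, which is what the grep traced just below counts. A minimal sketch of that check, assuming the run's output was captured to the try.txt file that this test cats and removes later:
    # one "Resetting controller successful" notice is logged per completed failover;
    # this phase of the test expects exactly 3
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || echo "unexpected failover count: $count"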
00:20:17.939 8309.90 IOPS, 32.46 MiB/s [2024-11-27T06:15:23.036Z] 8406.09 IOPS, 32.84 MiB/s [2024-11-27T06:15:23.036Z] 8488.92 IOPS, 33.16 MiB/s [2024-11-27T06:15:23.036Z] 8552.38 IOPS, 33.41 MiB/s [2024-11-27T06:15:23.036Z] 8603.07 IOPS, 33.61 MiB/s [2024-11-27T06:15:23.036Z] 8673.27 IOPS, 33.88 MiB/s 00:20:17.939 Latency(us) 00:20:17.939 [2024-11-27T06:15:23.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.939 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:17.939 Verification LBA range: start 0x0 length 0x4000 00:20:17.939 NVMe0n1 : 15.01 8674.15 33.88 231.11 0.00 14342.64 606.95 25022.84 00:20:17.939 [2024-11-27T06:15:23.036Z] =================================================================================================================== 00:20:17.939 [2024-11-27T06:15:23.036Z] Total : 8674.15 33.88 231.11 0.00 14342.64 606.95 25022.84 00:20:17.939 Received shutdown signal, test time was about 15.000000 seconds 00:20:17.939 00:20:17.939 Latency(us) 00:20:17.939 [2024-11-27T06:15:23.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.939 [2024-11-27T06:15:23.036Z] =================================================================================================================== 00:20:17.939 [2024-11-27T06:15:23.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:17.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75810 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75810 /var/tmp/bdevperf.sock 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75810 ']' 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
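Note: the launch traced above starts a second bdevperf instance in RPC-driven mode. A hedged restatement of that invocation as a standalone sketch (the -z/-q/-o/-w/-t flags are read as bdevperf's wait-for-RPC and workload options; -f is carried over from the log unchanged):
    # -z: start idle and wait for RPCs on the socket given with -r, so controllers can
    # be attached first and the run started later via bdevperf.py perform_tests;
    # -q 128 -o 4096 -w verify -t 1: queue depth 128, 4 KiB I/O, 1-second verify job
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # waitforlisten (the autotest_common.sh helper traced above) blocks until the pid
    # is alive and its RPC socket accepts connections
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock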
00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:17.939 [2024-11-27 06:15:22.831467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:17.939 06:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:18.199 [2024-11-27 06:15:23.083902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:18.199 06:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:18.459 NVMe0n1 00:20:18.459 06:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:19.027 00:20:19.027 06:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:19.286 00:20:19.286 06:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:19.286 06:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:19.544 06:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:19.803 06:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:23.092 06:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:23.092 06:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:23.092 06:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75879 00:20:23.092 06:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.092 06:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75879 00:20:24.030 { 00:20:24.030 "results": [ 00:20:24.030 { 00:20:24.030 "job": "NVMe0n1", 00:20:24.030 "core_mask": "0x1", 00:20:24.030 "workload": "verify", 00:20:24.030 "status": "finished", 00:20:24.030 "verify_range": { 00:20:24.030 "start": 0, 00:20:24.030 "length": 16384 00:20:24.030 }, 00:20:24.030 "queue_depth": 128, 
00:20:24.030 "io_size": 4096, 00:20:24.030 "runtime": 1.008137, 00:20:24.030 "iops": 8713.101493150236, 00:20:24.030 "mibps": 34.03555270761811, 00:20:24.030 "io_failed": 0, 00:20:24.030 "io_timeout": 0, 00:20:24.030 "avg_latency_us": 14609.724576916708, 00:20:24.030 "min_latency_us": 1154.3272727272727, 00:20:24.030 "max_latency_us": 15490.327272727272 00:20:24.030 } 00:20:24.030 ], 00:20:24.030 "core_count": 1 00:20:24.030 } 00:20:24.030 06:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:24.030 [2024-11-27 06:15:22.126931] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:20:24.030 [2024-11-27 06:15:22.127079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75810 ] 00:20:24.030 [2024-11-27 06:15:22.287006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.030 [2024-11-27 06:15:22.355559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.030 [2024-11-27 06:15:22.416883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:24.030 [2024-11-27 06:15:24.659123] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:24.030 [2024-11-27 06:15:24.659317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.030 [2024-11-27 06:15:24.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.030 [2024-11-27 06:15:24.659397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.030 [2024-11-27 06:15:24.659412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.030 [2024-11-27 06:15:24.659428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.030 [2024-11-27 06:15:24.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.030 [2024-11-27 06:15:24.659458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.030 [2024-11-27 06:15:24.659472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.030 [2024-11-27 06:15:24.659487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:20:24.030 [2024-11-27 06:15:24.659555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:20:24.030 [2024-11-27 06:15:24.659588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2284c60 (9): Bad file descriptor 00:20:24.030 [2024-11-27 06:15:24.668586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:20:24.030 Running I/O for 1 seconds... 00:20:24.030 8656.00 IOPS, 33.81 MiB/s 00:20:24.030 Latency(us) 00:20:24.030 [2024-11-27T06:15:29.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.030 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:24.030 Verification LBA range: start 0x0 length 0x4000 00:20:24.030 NVMe0n1 : 1.01 8713.10 34.04 0.00 0.00 14609.72 1154.33 15490.33 00:20:24.030 [2024-11-27T06:15:29.127Z] =================================================================================================================== 00:20:24.030 [2024-11-27T06:15:29.127Z] Total : 8713.10 34.04 0.00 0.00 14609.72 1154.33 15490.33 00:20:24.030 06:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:24.030 06:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:24.596 06:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:24.854 06:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:24.854 06:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:25.112 06:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:25.370 06:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75810 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75810 ']' 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75810 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75810 00:20:28.652 killing process with pid 75810 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75810' 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75810 00:20:28.652 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75810 00:20:28.910 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:28.910 06:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:29.170 rmmod nvme_tcp 00:20:29.170 rmmod nvme_fabrics 00:20:29.170 rmmod nvme_keyring 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75561 ']' 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75561 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75561 ']' 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75561 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75561 00:20:29.170 killing process with pid 75561 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75561' 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75561 00:20:29.170 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75561 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:29.429 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.687 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:29.688 00:20:29.688 real 0m32.660s 00:20:29.688 user 2m6.174s 00:20:29.688 sys 0m5.601s 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:29.688 ************************************ 00:20:29.688 END TEST nvmf_failover 00:20:29.688 ************************************ 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.688 ************************************ 00:20:29.688 START TEST nvmf_host_discovery 00:20:29.688 ************************************ 00:20:29.688 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:29.948 * Looking for test storage... 
00:20:29.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.948 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:29.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.948 --rc genhtml_branch_coverage=1 00:20:29.948 --rc genhtml_function_coverage=1 00:20:29.948 --rc genhtml_legend=1 00:20:29.948 --rc geninfo_all_blocks=1 00:20:29.948 --rc geninfo_unexecuted_blocks=1 00:20:29.948 00:20:29.948 ' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:29.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.949 --rc genhtml_branch_coverage=1 00:20:29.949 --rc genhtml_function_coverage=1 00:20:29.949 --rc genhtml_legend=1 00:20:29.949 --rc geninfo_all_blocks=1 00:20:29.949 --rc geninfo_unexecuted_blocks=1 00:20:29.949 00:20:29.949 ' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:29.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.949 --rc genhtml_branch_coverage=1 00:20:29.949 --rc genhtml_function_coverage=1 00:20:29.949 --rc genhtml_legend=1 00:20:29.949 --rc geninfo_all_blocks=1 00:20:29.949 --rc geninfo_unexecuted_blocks=1 00:20:29.949 00:20:29.949 ' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:29.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.949 --rc genhtml_branch_coverage=1 00:20:29.949 --rc genhtml_function_coverage=1 00:20:29.949 --rc genhtml_legend=1 00:20:29.949 --rc geninfo_all_blocks=1 00:20:29.949 --rc geninfo_unexecuted_blocks=1 00:20:29.949 00:20:29.949 ' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.949 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:29.949 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
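For reference, the veth topology that nvmf_veth_init assembles from the variables above can be rebuilt by hand; the sketch below is distilled from the ip/iptables commands that appear later in this trace and is trimmed to the first initiator/target pair (the test creates nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4 the same way). The "Cannot find device" / "Cannot open network namespace" messages that the trace prints first are just the best-effort teardown of any leftover topology; each of those commands is followed by a "true", so the failures are ignored.

  # initiator end stays in the root namespace, target end moves into nvmf_tgt_ns_spdk
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_FIRST_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up     # bridge joins the root-ns ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                          # connectivity check, as in the log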
00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:29.950 Cannot find device "nvmf_init_br" 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:29.950 06:15:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:29.950 Cannot find device "nvmf_init_br2" 00:20:29.950 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:29.950 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:29.950 Cannot find device "nvmf_tgt_br" 00:20:29.950 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:29.950 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.950 Cannot find device "nvmf_tgt_br2" 00:20:29.950 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:29.950 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:30.208 Cannot find device "nvmf_init_br" 00:20:30.208 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:30.208 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:30.208 Cannot find device "nvmf_init_br2" 00:20:30.208 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:30.208 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:30.208 Cannot find device "nvmf_tgt_br" 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:30.209 Cannot find device "nvmf_tgt_br2" 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:30.209 Cannot find device "nvmf_br" 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:30.209 Cannot find device "nvmf_init_if" 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:30.209 Cannot find device "nvmf_init_if2" 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.209 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:30.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:20:30.468 00:20:30.468 --- 10.0.0.3 ping statistics --- 00:20:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.468 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:30.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:30.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:20:30.468 00:20:30.468 --- 10.0.0.4 ping statistics --- 00:20:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.468 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:20:30.468 00:20:30.468 --- 10.0.0.1 ping statistics --- 00:20:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.468 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:30.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:20:30.468 00:20:30.468 --- 10.0.0.2 ping statistics --- 00:20:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.468 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76212 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76212 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76212 ']' 00:20:30.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.468 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.468 [2024-11-27 06:15:35.471842] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
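The target application is then launched inside the namespace and the test blocks until its RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A minimal equivalent of the launch plus readiness wait is sketched below; the polling loop is illustrative rather than the test's actual waitforlisten implementation, and rpc_get_methods is used only as a cheap RPC to prove the socket is live.

  # start the NVMe-oF target inside the target namespace (flags as shown in the trace)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # wait until the default RPC socket /var/tmp/spdk.sock accepts requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done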
00:20:30.468 [2024-11-27 06:15:35.471933] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.727 [2024-11-27 06:15:35.620816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.727 [2024-11-27 06:15:35.673580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.727 [2024-11-27 06:15:35.673650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.727 [2024-11-27 06:15:35.673677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.727 [2024-11-27 06:15:35.673685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.727 [2024-11-27 06:15:35.673692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.727 [2024-11-27 06:15:35.674107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.727 [2024-11-27 06:15:35.727260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.727 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.727 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:30.727 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.727 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.727 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 [2024-11-27 06:15:35.843185] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 [2024-11-27 06:15:35.851325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.986 06:15:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 null0 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 null1 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76232 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76232 /tmp/host.sock 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76232 ']' 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.986 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.986 06:15:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 [2024-11-27 06:15:35.929790] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
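At this point two SPDK applications are running: the target (core mask 0x2, RPC socket /var/tmp/spdk.sock, inside the namespace) and a second nvmf_tgt acting as the host/initiator (core mask 0x1, RPC socket /tmp/host.sock). The provisioning the trace just performed on the target, written out against scripts/rpc.py with explicit sockets (rpc_cmd in the trace is the test's wrapper around the same RPCs), is roughly:

  RPC_TGT="./scripts/rpc.py -s /var/tmp/spdk.sock"               # target-side RPCs
  $RPC_TGT nvmf_create_transport -t tcp -o -u 8192                # TCP transport, options as in the trace
  $RPC_TGT nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.3 -s 8009                                  # discovery service on port 8009
  $RPC_TGT bdev_null_create null0 1000 512                        # two null bdevs: 1000 MiB, 512 B blocks
  $RPC_TGT bdev_null_create null1 1000 512
  $RPC_TGT bdev_wait_for_examine
  # host-side application that will run the discovery client
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &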
00:20:30.986 [2024-11-27 06:15:35.929879] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76232 ] 00:20:30.986 [2024-11-27 06:15:36.078269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.244 [2024-11-27 06:15:36.142278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.244 [2024-11-27 06:15:36.201805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:31.245 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.504 06:15:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.504 06:15:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:31.504 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.764 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 [2024-11-27 06:15:36.663560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:32.024 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.024 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:20:32.024 06:15:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:20:32.283 [2024-11-27 06:15:37.286660] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:32.283 [2024-11-27 06:15:37.286699] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:32.283 [2024-11-27 06:15:37.286757] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:32.283 
[2024-11-27 06:15:37.292719] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:32.283 [2024-11-27 06:15:37.347152] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:32.283 [2024-11-27 06:15:37.348373] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x19aee60:1 started. 00:20:32.283 [2024-11-27 06:15:37.350200] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:32.283 [2024-11-27 06:15:37.350225] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:32.283 [2024-11-27 06:15:37.355289] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x19aee60 was disconnected and freed. delete nvme_qpair. 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:32.858 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:33.128 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.129 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.129 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.129 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 06:15:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.129 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.129 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.129 06:15:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 [2024-11-27 06:15:38.149025] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x19bd2f0:1 started. 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.129 [2024-11-27 06:15:38.155571] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x19bd2f0 was disconnected and freed. delete nvme_qpair. 
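The host side starts the discovery service against its own RPC socket, and the target side then builds the data subsystem that discovery is expected to pick up; the trace's get_subsystem_names, get_bdev_list and get_subsystem_paths helpers are thin jq wrappers over two host RPCs. Written out explicitly with the sockets and names shown above, the sequence is approximately:

  RPC_HOST="./scripts/rpc.py -s /tmp/host.sock"
  RPC_TGT="./scripts/rpc.py -s /var/tmp/spdk.sock"
  # host: connect to the discovery service and auto-attach whatever it reports
  $RPC_HOST bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test
  # target: build the subsystem the discovery service will announce
  $RPC_TGT nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC_TGT nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC_TGT nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $RPC_TGT nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  $RPC_TGT nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # second namespace -> nvme0n2
  # host: the checks the waitforcondition loops above keep polling
  $RPC_HOST bdev_nvme_get_controllers | jq -r '.[].name'            # expect "nvme0"
  $RPC_HOST bdev_get_bdevs | jq -r '.[].name'                       # expect nvme0n1 nvme0n2
  $RPC_HOST bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # expect 4420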
00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.129 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.388 [2024-11-27 06:15:38.280919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:33.388 [2024-11-27 06:15:38.281324] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:33.388 [2024-11-27 06:15:38.281359] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:33.388 [2024-11-27 06:15:38.287340] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:33.388 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.389 [2024-11-27 06:15:38.349798] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:20:33.389 [2024-11-27 06:15:38.349868] 
bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:33.389 [2024-11-27 06:15:38.349880] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:33.389 [2024-11-27 06:15:38.349886] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:33.389 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.648 [2024-11-27 06:15:38.501710] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:33.648 [2024-11-27 06:15:38.501759] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.648 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:33.648 [2024-11-27 06:15:38.507336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.648 [2024-11-27 06:15:38.507376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.648 [2024-11-27 06:15:38.507391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.648 [2024-11-27 06:15:38.507401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:33.648 [2024-11-27 06:15:38.507411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.648 [2024-11-27 06:15:38.507420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.648 [2024-11-27 06:15:38.507436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.648 [2024-11-27 06:15:38.507446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.648 [2024-11-27 06:15:38.507455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b240 is same with the state(6) to be set 00:20:33.648 [2024-11-27 06:15:38.507741] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:33.649 [2024-11-27 06:15:38.507768] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:33.649 [2024-11-27 06:15:38.507824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198b240 (9): Bad file descriptor 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.649 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.907 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.908 
06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 06:15:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.282 [2024-11-27 06:15:39.947474] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:35.282 [2024-11-27 06:15:39.947511] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:35.282 [2024-11-27 06:15:39.947534] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:35.282 [2024-11-27 06:15:39.953506] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:35.282 [2024-11-27 06:15:40.011849] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:35.282 [2024-11-27 06:15:40.012632] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x19bb6d0:1 started. 00:20:35.282 [2024-11-27 06:15:40.014901] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:35.282 [2024-11-27 06:15:40.014945] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:35.282 [2024-11-27 06:15:40.016309] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x19bb6d0 was disconnected and freed. delete nvme_qpair. 
00:20:35.282 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.282 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:35.282 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:35.282 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:35.282 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:35.282 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 request: 00:20:35.283 { 00:20:35.283 "name": "nvme", 00:20:35.283 "trtype": "tcp", 00:20:35.283 "traddr": "10.0.0.3", 00:20:35.283 "adrfam": "ipv4", 00:20:35.283 "trsvcid": "8009", 00:20:35.283 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:35.283 "wait_for_attach": true, 00:20:35.283 "method": "bdev_nvme_start_discovery", 00:20:35.283 "req_id": 1 00:20:35.283 } 00:20:35.283 Got JSON-RPC error response 00:20:35.283 response: 00:20:35.283 { 00:20:35.283 "code": -17, 00:20:35.283 "message": "File exists" 00:20:35.283 } 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 request: 00:20:35.283 { 00:20:35.283 "name": "nvme_second", 00:20:35.283 "trtype": "tcp", 00:20:35.283 "traddr": "10.0.0.3", 00:20:35.283 "adrfam": "ipv4", 00:20:35.283 "trsvcid": "8009", 00:20:35.283 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:35.283 "wait_for_attach": true, 00:20:35.283 "method": "bdev_nvme_start_discovery", 00:20:35.283 "req_id": 1 00:20:35.283 } 00:20:35.283 Got JSON-RPC error response 00:20:35.283 response: 00:20:35.283 { 00:20:35.283 "code": -17, 00:20:35.283 "message": "File exists" 00:20:35.283 } 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.283 06:15:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.220 [2024-11-27 06:15:41.283307] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.220 [2024-11-27 06:15:41.283359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fd0 with addr=10.0.0.3, port=8010 00:20:36.220 [2024-11-27 06:15:41.283386] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:36.220 [2024-11-27 06:15:41.283397] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:36.220 [2024-11-27 06:15:41.283407] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:37.596 [2024-11-27 06:15:42.283305] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.596 [2024-11-27 06:15:42.283410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fd0 with addr=10.0.0.3, port=8010 00:20:37.596 [2024-11-27 06:15:42.283446] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:37.596 [2024-11-27 06:15:42.283457] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:37.596 [2024-11-27 06:15:42.283466] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:38.531 [2024-11-27 06:15:43.283150] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:38.531 request: 00:20:38.531 { 00:20:38.531 "name": "nvme_second", 00:20:38.531 "trtype": "tcp", 00:20:38.531 "traddr": "10.0.0.3", 00:20:38.531 "adrfam": "ipv4", 00:20:38.531 "trsvcid": "8010", 00:20:38.531 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:38.531 "wait_for_attach": false, 00:20:38.531 "attach_timeout_ms": 3000, 00:20:38.531 "method": "bdev_nvme_start_discovery", 00:20:38.531 "req_id": 1 00:20:38.531 } 00:20:38.531 Got JSON-RPC error response 00:20:38.531 response: 00:20:38.531 { 00:20:38.531 "code": -110, 00:20:38.531 "message": "Connection timed out" 00:20:38.531 } 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:38.531 
06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76232 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.531 rmmod nvme_tcp 00:20:38.531 rmmod nvme_fabrics 00:20:38.531 rmmod nvme_keyring 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:38.531 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76212 ']' 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76212 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76212 ']' 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76212 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76212 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.532 killing process with pid 76212 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76212' 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76212 00:20:38.532 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76212 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:38.790 06:15:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.790 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.048 06:15:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:39.048 00:20:39.048 real 0m9.264s 00:20:39.048 user 0m17.408s 00:20:39.048 sys 0m2.088s 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.048 ************************************ 00:20:39.048 END TEST nvmf_host_discovery 00:20:39.048 ************************************ 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.048 ************************************ 00:20:39.048 START TEST nvmf_host_multipath_status 00:20:39.048 ************************************ 00:20:39.048 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:39.308 * Looking for test storage... 00:20:39.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.308 --rc genhtml_branch_coverage=1 00:20:39.308 --rc genhtml_function_coverage=1 00:20:39.308 --rc genhtml_legend=1 00:20:39.308 --rc geninfo_all_blocks=1 00:20:39.308 --rc geninfo_unexecuted_blocks=1 00:20:39.308 00:20:39.308 ' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.308 --rc genhtml_branch_coverage=1 00:20:39.308 --rc genhtml_function_coverage=1 00:20:39.308 --rc genhtml_legend=1 00:20:39.308 --rc geninfo_all_blocks=1 00:20:39.308 --rc geninfo_unexecuted_blocks=1 00:20:39.308 00:20:39.308 ' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.308 --rc genhtml_branch_coverage=1 00:20:39.308 --rc genhtml_function_coverage=1 00:20:39.308 --rc genhtml_legend=1 00:20:39.308 --rc geninfo_all_blocks=1 00:20:39.308 --rc geninfo_unexecuted_blocks=1 00:20:39.308 00:20:39.308 ' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.308 --rc genhtml_branch_coverage=1 00:20:39.308 --rc genhtml_function_coverage=1 00:20:39.308 --rc genhtml_legend=1 00:20:39.308 --rc geninfo_all_blocks=1 00:20:39.308 --rc geninfo_unexecuted_blocks=1 00:20:39.308 00:20:39.308 ' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.308 06:15:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.308 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.309 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:39.309 Cannot find device "nvmf_init_br" 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:39.309 Cannot find device "nvmf_init_br2" 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:39.309 Cannot find device "nvmf_tgt_br" 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.309 Cannot find device "nvmf_tgt_br2" 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:39.309 Cannot find device "nvmf_init_br" 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:39.309 Cannot find device "nvmf_init_br2" 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:39.309 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:39.569 Cannot find device "nvmf_tgt_br" 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:39.569 Cannot find device "nvmf_tgt_br2" 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:39.569 Cannot find device "nvmf_br" 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:39.569 Cannot find device "nvmf_init_if" 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:39.569 Cannot find device "nvmf_init_if2" 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.569 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.828 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.828 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:20:39.828 00:20:39.828 --- 10.0.0.3 ping statistics --- 00:20:39.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.828 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.828 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.828 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:20:39.828 00:20:39.828 --- 10.0.0.4 ping statistics --- 00:20:39.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.828 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:39.828 00:20:39.828 --- 10.0.0.1 ping statistics --- 00:20:39.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.828 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:39.828 00:20:39.828 --- 10.0.0.2 ping statistics --- 00:20:39.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.828 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:39.828 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76726 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76726 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76726 ']' 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
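The trace above is nvmf_veth_init building the virtual test network before the target starts; the "Cannot find device" / "Cannot open network namespace" messages come from tearing down leftovers of a previous run and are tolerated (each failing command is followed by a true). A condensed sketch of the topology it creates, using only the interface names, addresses, and iptables rules visible in the trace (the link-up/nomaster housekeeping is omitted):

  ip netns add nvmf_tgt_ns_spdk                                   # target side lives in its own netns
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # two initiator-side veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # two target-side veth pairs
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses on the host
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target addresses in the netns
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                 # bridge ties the *_br peers together
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP in on both initiator ports
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) confirm the bridge forwards traffic in both directions before nvmf_tgt is launched.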
00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.829 06:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:39.829 [2024-11-27 06:15:44.808226] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:20:39.829 [2024-11-27 06:15:44.808319] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.087 [2024-11-27 06:15:44.964636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:40.087 [2024-11-27 06:15:45.037715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.087 [2024-11-27 06:15:45.037795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.087 [2024-11-27 06:15:45.037811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.087 [2024-11-27 06:15:45.037822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.087 [2024-11-27 06:15:45.037831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.087 [2024-11-27 06:15:45.039181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.087 [2024-11-27 06:15:45.039187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.087 [2024-11-27 06:15:45.101271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.087 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.087 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:40.344 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.344 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.344 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:40.344 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.344 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76726 00:20:40.344 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:40.603 [2024-11-27 06:15:45.535811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.603 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:40.861 Malloc0 00:20:40.861 06:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:41.120 06:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.687 06:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:41.946 [2024-11-27 06:15:46.806077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:41.946 06:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:42.204 [2024-11-27 06:15:47.074144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76775 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76775 /var/tmp/bdevperf.sock 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76775 ']' 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
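The entries above provision the target over the default RPC socket (/var/tmp/spdk.sock) and then start bdevperf as the host-side application. A condensed sketch of the same sequence, with the sizes, NQN, and flags as they appear in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # -r above enables ANA reporting for the subsystem
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90

One subsystem with two TCP listeners gives the host two paths to the same namespace; the two bdev_nvme_attach_controller calls that follow both use -x multipath, so 4420 and 4421 end up as two I/O paths of a single Nvme0n1 bdev.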
00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.204 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:42.462 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.462 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:42.462 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:43.027 06:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:43.287 Nvme0n1 00:20:43.287 06:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:43.589 Nvme0n1 00:20:43.589 06:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:43.589 06:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:46.120 06:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:46.120 06:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:46.120 06:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:46.378 06:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:47.315 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:47.315 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:47.315 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.315 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:47.587 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.587 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:47.587 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.587 06:15:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:47.847 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:47.847 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:47.847 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:47.847 06:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.106 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.106 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:48.106 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:48.106 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.365 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.365 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:48.366 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:48.366 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.624 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.624 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:48.624 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.624 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:48.883 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.883 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:48.883 06:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:49.143 06:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:49.401 06:15:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:50.340 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:50.340 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:50.340 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.340 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:50.599 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:50.599 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:50.599 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.599 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:50.857 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.857 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:50.857 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:50.857 06:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.425 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:51.683 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.683 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:51.683 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.683 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:51.942 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.942 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:51.942 06:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:52.202 06:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:52.461 06:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.840 06:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:54.099 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:54.099 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:54.099 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:54.099 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.357 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.357 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:54.357 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.357 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:54.616 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.616 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:54.616 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.616 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:54.874 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.874 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:54.874 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:54.874 06:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.133 06:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.133 06:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:55.133 06:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:55.392 06:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:55.650 06:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:57.027 06:16:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.027 06:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:57.299 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:57.299 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:57.299 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.299 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:57.568 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.568 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:57.568 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:57.568 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.827 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.827 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:57.827 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.827 06:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:58.085 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.085 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:58.085 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:58.085 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.344 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:58.344 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:58.344 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:58.603 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:58.862 06:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:59.829 06:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:59.829 06:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:59.829 06:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.829 06:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:00.089 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:00.089 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:00.089 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.089 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:00.348 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:00.348 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:00.348 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.348 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:00.607 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.607 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:00.607 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.607 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:00.866 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.866 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:00.866 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.866 06:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:21:01.124 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:01.124 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:01.124 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:01.124 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.383 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:01.383 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:01.383 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:01.642 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:01.901 06:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:03.278 06:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:03.278 06:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:03.278 06:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.278 06:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:03.278 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:03.278 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:03.278 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.278 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:03.538 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.538 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:03.538 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.538 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
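Each check_status round in the trace pairs a target-side ANA change with a host-side query: the listener's ANA state is set through the target RPC, and the resulting per-path flags (current / connected / accessible) are read back from bdevperf and compared against the expected true/false values. A condensed sketch of one such probe, with the socket path and jq filter exactly as they appear in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: make the 4421 listener inaccessible for this subsystem
  $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
  # host side: read the path table from bdevperf and pick out one flag
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible'
  # expected output here: false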
00:21:03.798 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.798 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:03.798 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.798 06:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:04.057 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.057 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:04.057 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.057 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:04.315 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:04.315 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:04.315 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.315 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:04.573 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.573 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:04.832 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:04.832 06:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:05.091 06:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:05.349 06:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:06.287 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:06.287 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:06.287 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
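Up to this entry the controller has been using the default multipath policy (active_passive), which is why earlier rounds with both listeners optimized still report current == true on only one port (check_status true false …). The bdev_nvme_set_multipath_policy call above switches Nvme0n1 to active_active, and the following round with both listeners optimized shows current == true on 4420 and 4421 alike (check_status true true …). The switch is a single host-side RPC, as in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active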
00:21:06.287 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:06.546 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.546 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:06.546 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.546 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:06.807 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.807 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:06.807 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:06.807 06:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.066 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.066 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:07.066 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.066 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:07.325 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.325 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:07.325 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.325 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:07.585 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.585 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:07.585 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.585 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:08.153 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.153 
06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:08.153 06:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:08.153 06:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:08.413 06:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:09.789 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:09.790 06:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.048 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.048 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:10.048 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.048 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:10.307 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.307 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:10.307 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.307 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:10.566 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.566 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:10.566 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.566 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:11.133 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.133 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:11.133 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:11.133 06:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.391 06:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.391 06:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:11.391 06:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:11.649 06:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:11.649 06:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:13.023 06:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:13.024 06:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:13.024 06:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.024 06:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:13.024 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.024 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:13.024 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.024 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.282 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.282 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:21:13.282 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.282 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:13.540 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.540 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:13.540 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.540 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:14.107 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.107 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:14.107 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.107 06:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:14.364 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.364 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:14.364 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.364 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:14.622 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.622 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:14.622 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:14.881 06:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:15.138 06:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:16.075 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:16.075 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:16.075 06:16:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.075 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:16.334 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.334 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:16.334 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:16.334 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.593 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:16.593 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:16.593 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.593 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:16.852 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.852 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:16.852 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.852 06:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:17.111 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.111 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:17.111 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.111 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:17.369 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.369 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:17.369 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.369 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76775 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76775 ']' 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76775 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76775 00:21:17.938 killing process with pid 76775 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76775' 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76775 00:21:17.938 06:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76775 00:21:17.938 { 00:21:17.938 "results": [ 00:21:17.938 { 00:21:17.938 "job": "Nvme0n1", 00:21:17.938 "core_mask": "0x4", 00:21:17.938 "workload": "verify", 00:21:17.938 "status": "terminated", 00:21:17.938 "verify_range": { 00:21:17.938 "start": 0, 00:21:17.938 "length": 16384 00:21:17.938 }, 00:21:17.938 "queue_depth": 128, 00:21:17.938 "io_size": 4096, 00:21:17.938 "runtime": 34.082707, 00:21:17.938 "iops": 9309.207745734515, 00:21:17.938 "mibps": 36.36409275677545, 00:21:17.938 "io_failed": 0, 00:21:17.938 "io_timeout": 0, 00:21:17.938 "avg_latency_us": 13720.486854150568, 00:21:17.938 "min_latency_us": 301.61454545454546, 00:21:17.938 "max_latency_us": 4026531.84 00:21:17.938 } 00:21:17.938 ], 00:21:17.938 "core_count": 1 00:21:17.938 } 00:21:18.201 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76775 00:21:18.201 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:18.201 [2024-11-27 06:15:47.155871] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:21:18.201 [2024-11-27 06:15:47.155998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76775 ] 00:21:18.201 [2024-11-27 06:15:47.306345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.201 [2024-11-27 06:15:47.371645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.201 [2024-11-27 06:15:47.428103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:18.201 Running I/O for 90 seconds... 
00:21:18.201 8800.00 IOPS, 34.38 MiB/s [2024-11-27T06:16:23.298Z] 9440.00 IOPS, 36.88 MiB/s [2024-11-27T06:16:23.298Z] 9603.00 IOPS, 37.51 MiB/s [2024-11-27T06:16:23.298Z] 9664.25 IOPS, 37.75 MiB/s [2024-11-27T06:16:23.298Z] 9719.40 IOPS, 37.97 MiB/s [2024-11-27T06:16:23.298Z] 9789.17 IOPS, 38.24 MiB/s [2024-11-27T06:16:23.298Z] 9829.86 IOPS, 38.40 MiB/s [2024-11-27T06:16:23.298Z] 9865.12 IOPS, 38.54 MiB/s [2024-11-27T06:16:23.298Z] 9891.56 IOPS, 38.64 MiB/s [2024-11-27T06:16:23.298Z] 9923.20 IOPS, 38.76 MiB/s [2024-11-27T06:16:23.298Z] 9940.36 IOPS, 38.83 MiB/s [2024-11-27T06:16:23.298Z] 9928.00 IOPS, 38.78 MiB/s [2024-11-27T06:16:23.298Z] 9905.85 IOPS, 38.69 MiB/s [2024-11-27T06:16:23.298Z] 9869.00 IOPS, 38.55 MiB/s [2024-11-27T06:16:23.298Z] [2024-11-27 06:16:03.544353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.544803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.544846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.544937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.544960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.544977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.545017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.545059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.545099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.545156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-11-27 06:16:03.545203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 
06:16:03.545456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.201 [2024-11-27 06:16:03.545745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:18.201 [2024-11-27 06:16:03.545769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.545789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.545814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.545833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.545857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.545876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.545899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121112 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.545918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.545942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.545960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.545984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.202 [2024-11-27 06:16:03.546853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.546896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.546941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.546964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.546984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547269] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.202 [2024-11-27 06:16:03.547504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:18.202 [2024-11-27 06:16:03.547529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.547552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.547605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 
06:16:03.547738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.547949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.547968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121824 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.203 [2024-11-27 06:16:03.548761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.548804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.548847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.548889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.548930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.548974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.548998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.203 [2024-11-27 06:16:03.549332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:18.203 [2024-11-27 06:16:03.549356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.549797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.549815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:03.550638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.550703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.550761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.550815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 
06:16:03.550873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.550936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.550967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.550986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.551016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.551049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.551101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.551126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.551191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.551215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.551245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.551266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.551295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.551314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:03.551344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:03.551363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:18.204 9746.67 IOPS, 38.07 MiB/s [2024-11-27T06:16:23.301Z] 9137.50 IOPS, 35.69 MiB/s [2024-11-27T06:16:23.301Z] 8600.00 IOPS, 33.59 MiB/s [2024-11-27T06:16:23.301Z] 8122.22 IOPS, 31.73 MiB/s [2024-11-27T06:16:23.301Z] 7787.63 IOPS, 30.42 MiB/s [2024-11-27T06:16:23.301Z] 7890.65 IOPS, 30.82 MiB/s [2024-11-27T06:16:23.301Z] 7984.62 IOPS, 31.19 MiB/s [2024-11-27T06:16:23.301Z] 8172.32 IOPS, 31.92 MiB/s [2024-11-27T06:16:23.301Z] 8384.22 IOPS, 32.75 MiB/s [2024-11-27T06:16:23.301Z] 8559.92 
IOPS, 33.44 MiB/s [2024-11-27T06:16:23.301Z] 8677.92 IOPS, 33.90 MiB/s [2024-11-27T06:16:23.301Z] 8714.88 IOPS, 34.04 MiB/s [2024-11-27T06:16:23.301Z] 8751.63 IOPS, 34.19 MiB/s [2024-11-27T06:16:23.301Z] 8796.50 IOPS, 34.36 MiB/s [2024-11-27T06:16:23.301Z] 8952.62 IOPS, 34.97 MiB/s [2024-11-27T06:16:23.301Z] 9089.50 IOPS, 35.51 MiB/s [2024-11-27T06:16:23.301Z] 9219.16 IOPS, 36.01 MiB/s [2024-11-27T06:16:23.301Z] [2024-11-27 06:16:20.030335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.204 [2024-11-27 06:16:20.030838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:20.030879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:20.030917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.204 [2024-11-27 06:16:20.030956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:18.204 [2024-11-27 06:16:20.030978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.030995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.031908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.031974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.031992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.032212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.032253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.205 [2024-11-27 06:16:20.032388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.205 [2024-11-27 06:16:20.032428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:18.205 [2024-11-27 06:16:20.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.032468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.032492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.032509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.032532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.032550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
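The repeated "(03/02)" in the completion notices above is the NVMe status code type / status code pair: SCT 0x3 (Path Related Status) with SC 0x02, Asymmetric Access Inaccessible, consistent with the multipath_status test driving one path through the inaccessible ANA state while the verify workload keeps issuing I/O. A minimal decode sketch in plain bash (values taken from the log entries above; the helper name is illustrative only and not part of the test scripts):
  # Assemble the 15-bit NVMe status field from SCT/SC as laid out in the spec:
  # SC occupies bits 7:0, SCT bits 10:8 (CRD/M/DNR sit above and are 0 in these entries).
  decode_status() {
    local sct=$1 sc=$2
    printf 'status=0x%04x (sct=0x%x sc=0x%x)\n' $(( (sct << 8) | sc )) $(( sct )) $(( sc ))
  }
  decode_status 0x03 0x02   # -> status=0x0302 (sct=0x3 sc=0x2): ASYMMETRIC ACCESS INACCESSIBLE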
00:21:18.206 [2024-11-27 06:16:20.032574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.032592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.032615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.032632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.032654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.032671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.032696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.032714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.033908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.033939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.033970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.033990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.034050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.034328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.034368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.034409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.206 [2024-11-27 06:16:20.034449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:18.206 [2024-11-27 06:16:20.034654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.206 [2024-11-27 06:16:20.034673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:18.206 9278.88 IOPS, 36.25 MiB/s [2024-11-27T06:16:23.303Z] 9298.06 IOPS, 36.32 MiB/s [2024-11-27T06:16:23.303Z] 9310.44 IOPS, 36.37 MiB/s [2024-11-27T06:16:23.303Z] Received shutdown signal, test time was about 34.083492 seconds 00:21:18.206 00:21:18.206 Latency(us) 00:21:18.206 [2024-11-27T06:16:23.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.206 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.206 Verification LBA range: start 0x0 length 0x4000 00:21:18.206 Nvme0n1 : 34.08 9309.21 36.36 0.00 0.00 13720.49 301.61 4026531.84 00:21:18.206 [2024-11-27T06:16:23.303Z] =================================================================================================================== 00:21:18.206 [2024-11-27T06:16:23.303Z] Total : 9309.21 36.36 0.00 0.00 13720.49 301.61 4026531.84 00:21:18.206 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.465 rmmod nvme_tcp 00:21:18.465 rmmod nvme_fabrics 00:21:18.465 rmmod nvme_keyring 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76726 ']' 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76726 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76726 ']' 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76726 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76726 00:21:18.465 killing process 
with pid 76726 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76726' 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76726 00:21:18.465 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76726 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:18.724 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:18.983 06:16:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:21:18.983 00:21:18.983 real 0m39.881s 00:21:18.983 user 2m7.310s 00:21:18.983 sys 0m12.916s 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.983 06:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:18.983 ************************************ 00:21:18.983 END TEST nvmf_host_multipath_status 00:21:18.983 ************************************ 00:21:18.983 06:16:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:18.983 06:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.983 06:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.983 06:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.984 ************************************ 00:21:18.984 START TEST nvmf_discovery_remove_ifc 00:21:18.984 ************************************ 00:21:18.984 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:19.244 * Looking for test storage... 
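The trace just above closes out nvmf_host_multipath_status before nvmf_discovery_remove_ifc begins: the test subsystem is deleted over the RPC socket, the kernel NVMe-oF initiator modules are unloaded, the nvmf target process is killed, and the SPDK_NVMF iptables rules plus the veth/bridge topology are removed. A condensed sketch of that teardown, using only commands, paths and names visible in the trace (the target network namespace itself is cleaned up by _remove_spdk_ns, whose body is not shown here):
  # Delete the subsystem created for the test, then tear the soft NVMe-oF fabric back down.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                        # also drags out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 76726 && wait 76726                       # nvmf target app; wait works because it is a child of the test shell
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
  ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down   # likewise for the other bridge members
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2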
00:21:19.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.244 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:19.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.245 --rc genhtml_branch_coverage=1 00:21:19.245 --rc genhtml_function_coverage=1 00:21:19.245 --rc genhtml_legend=1 00:21:19.245 --rc geninfo_all_blocks=1 00:21:19.245 --rc geninfo_unexecuted_blocks=1 00:21:19.245 00:21:19.245 ' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:19.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.245 --rc genhtml_branch_coverage=1 00:21:19.245 --rc genhtml_function_coverage=1 00:21:19.245 --rc genhtml_legend=1 00:21:19.245 --rc geninfo_all_blocks=1 00:21:19.245 --rc geninfo_unexecuted_blocks=1 00:21:19.245 00:21:19.245 ' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:19.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.245 --rc genhtml_branch_coverage=1 00:21:19.245 --rc genhtml_function_coverage=1 00:21:19.245 --rc genhtml_legend=1 00:21:19.245 --rc geninfo_all_blocks=1 00:21:19.245 --rc geninfo_unexecuted_blocks=1 00:21:19.245 00:21:19.245 ' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:19.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.245 --rc genhtml_branch_coverage=1 00:21:19.245 --rc genhtml_function_coverage=1 00:21:19.245 --rc genhtml_legend=1 00:21:19.245 --rc geninfo_all_blocks=1 00:21:19.245 --rc geninfo_unexecuted_blocks=1 00:21:19.245 00:21:19.245 ' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.245 06:16:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.245 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.245 06:16:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.245 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:19.246 Cannot find device "nvmf_init_br" 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:19.246 Cannot find device "nvmf_init_br2" 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:19.246 Cannot find device "nvmf_tgt_br" 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.246 Cannot find device "nvmf_tgt_br2" 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:19.246 Cannot find device "nvmf_init_br" 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:21:19.246 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:19.505 Cannot find device "nvmf_init_br2" 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:19.505 Cannot find device "nvmf_tgt_br" 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:19.505 Cannot find device "nvmf_tgt_br2" 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:19.505 Cannot find device "nvmf_br" 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:19.505 Cannot find device "nvmf_init_if" 00:21:19.505 06:16:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:19.505 Cannot find device "nvmf_init_if2" 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.505 06:16:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:19.505 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:19.506 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.506 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:19.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:19.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:19.765 00:21:19.765 --- 10.0.0.3 ping statistics --- 00:21:19.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.765 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:19.765 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:19.765 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:21:19.765 00:21:19.765 --- 10.0.0.4 ping statistics --- 00:21:19.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.765 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:21:19.765 00:21:19.765 --- 10.0.0.1 ping statistics --- 00:21:19.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.765 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:19.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:21:19.765 00:21:19.765 --- 10.0.0.2 ping statistics --- 00:21:19.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.765 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77620 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77620 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77620 ']' 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.765 06:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:19.765 [2024-11-27 06:16:24.745429] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:21:19.765 [2024-11-27 06:16:24.745527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.024 [2024-11-27 06:16:24.899510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.024 [2024-11-27 06:16:24.955737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.024 [2024-11-27 06:16:24.955795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.024 [2024-11-27 06:16:24.955811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.024 [2024-11-27 06:16:24.955821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.024 [2024-11-27 06:16:24.955830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.024 [2024-11-27 06:16:24.956324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.024 [2024-11-27 06:16:25.013296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.024 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.024 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:21:20.024 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.024 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.024 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:20.283 [2024-11-27 06:16:25.144616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.283 [2024-11-27 06:16:25.152769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:20.283 null0 00:21:20.283 [2024-11-27 06:16:25.184693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77644 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77644 /tmp/host.sock 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77644 ']' 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:21:20.283 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.283 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:20.284 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:20.284 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.284 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:20.284 [2024-11-27 06:16:25.269564] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:21:20.284 [2024-11-27 06:16:25.269670] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77644 ] 00:21:20.542 [2024-11-27 06:16:25.421973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.542 [2024-11-27 06:16:25.492438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.542 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:20.542 [2024-11-27 06:16:25.599401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.801 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.801 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:20.801 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.801 06:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:21.737 [2024-11-27 06:16:26.657726] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:21.737 [2024-11-27 06:16:26.657767] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:21.737 [2024-11-27 06:16:26.657790] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:21.737 [2024-11-27 06:16:26.663769] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:21:21.737 [2024-11-27 06:16:26.718256] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:21.737 [2024-11-27 06:16:26.720524] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16c6000:1 started. 00:21:21.737 [2024-11-27 06:16:26.722298] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:21.737 [2024-11-27 06:16:26.722356] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:21.737 [2024-11-27 06:16:26.722385] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:21.737 [2024-11-27 06:16:26.722402] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:21.737 [2024-11-27 06:16:26.722429] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:21.737 [2024-11-27 06:16:26.726496] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16c6000 was disconnected and freed. delete nvme_qpair. 
00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:21.737 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.995 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:21.995 06:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 06:16:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:22.936 06:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:23.872 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.131 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:24.131 06:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:25.067 06:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:25.067 06:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.067 06:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:25.067 06:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.019 06:16:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:26.019 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.343 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:26.344 06:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.304 [2024-11-27 06:16:32.148936] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:27.304 [2024-11-27 06:16:32.149021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.304 [2024-11-27 06:16:32.149038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.304 [2024-11-27 06:16:32.149051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.304 [2024-11-27 06:16:32.149076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.304 [2024-11-27 06:16:32.149086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.304 [2024-11-27 06:16:32.149095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.304 [2024-11-27 06:16:32.149105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.304 [2024-11-27 06:16:32.149114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.304 [2024-11-27 06:16:32.149124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.304 [2024-11-27 06:16:32.149133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.304 [2024-11-27 06:16:32.149142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2250 is same with the state(6) to be set 00:21:27.304 [2024-11-27 06:16:32.158931] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2250 (9): Bad file descriptor 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:27.304 [2024-11-27 06:16:32.168953] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:27.304 [2024-11-27 06:16:32.168970] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:27.304 [2024-11-27 06:16:32.168976] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:27.304 [2024-11-27 06:16:32.168982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:27.304 06:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:27.304 [2024-11-27 06:16:32.169038] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:28.241 [2024-11-27 06:16:33.192279] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:28.241 [2024-11-27 06:16:33.192779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a2250 with addr=10.0.0.3, port=4420 00:21:28.241 [2024-11-27 06:16:33.193033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2250 is same with the state(6) to be set 00:21:28.241 [2024-11-27 06:16:33.193115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2250 (9): Bad file descriptor 00:21:28.241 [2024-11-27 06:16:33.194106] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:21:28.241 [2024-11-27 06:16:33.194248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:28.241 [2024-11-27 06:16:33.194275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:28.241 [2024-11-27 06:16:33.194300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:28.241 [2024-11-27 06:16:33.194319] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:28.241 [2024-11-27 06:16:33.194333] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:21:28.241 [2024-11-27 06:16:33.194345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:28.241 [2024-11-27 06:16:33.194365] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:28.241 [2024-11-27 06:16:33.194378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:28.241 06:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:29.178 [2024-11-27 06:16:34.194453] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:29.178 [2024-11-27 06:16:34.194902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:29.178 [2024-11-27 06:16:34.194963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:29.178 [2024-11-27 06:16:34.194991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:29.178 [2024-11-27 06:16:34.195009] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:21:29.178 [2024-11-27 06:16:34.195021] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:29.178 [2024-11-27 06:16:34.195028] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:29.178 [2024-11-27 06:16:34.195033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:29.178 [2024-11-27 06:16:34.195071] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:29.178 [2024-11-27 06:16:34.195127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.178 [2024-11-27 06:16:34.195163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.178 [2024-11-27 06:16:34.195180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.178 [2024-11-27 06:16:34.195190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.178 [2024-11-27 06:16:34.195203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.178 [2024-11-27 06:16:34.195212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.178 [2024-11-27 06:16:34.195223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.178 [2024-11-27 06:16:34.195248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.178 [2024-11-27 06:16:34.195258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.178 [2024-11-27 06:16:34.195266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.178 [2024-11-27 06:16:34.195276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:21:29.178 [2024-11-27 06:16:34.195337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162da20 (9): Bad file descriptor 00:21:29.178 [2024-11-27 06:16:34.196329] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:29.178 [2024-11-27 06:16:34.196345] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:29.178 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:29.437 06:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:30.373 06:16:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:30.373 06:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:31.310 [2024-11-27 06:16:36.202136] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:31.310 [2024-11-27 06:16:36.202193] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:31.310 [2024-11-27 06:16:36.202216] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:31.310 [2024-11-27 06:16:36.207173] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:31.310 [2024-11-27 06:16:36.261733] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:21:31.310 [2024-11-27 06:16:36.262851] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x16add80:1 started. 00:21:31.310 [2024-11-27 06:16:36.264366] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:31.310 [2024-11-27 06:16:36.264562] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:31.310 [2024-11-27 06:16:36.264756] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:31.310 [2024-11-27 06:16:36.264787] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:31.310 [2024-11-27 06:16:36.264797] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:31.310 [2024-11-27 06:16:36.271081] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x16add80 was disconnected and freed. delete nvme_qpair. 
00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77644 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77644 ']' 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77644 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77644 00:21:31.570 killing process with pid 77644 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77644' 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77644 00:21:31.570 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77644 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.829 rmmod nvme_tcp 00:21:31.829 rmmod nvme_fabrics 00:21:31.829 rmmod nvme_keyring 00:21:31.829 06:16:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77620 ']' 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77620 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77620 ']' 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77620 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77620 00:21:31.829 killing process with pid 77620 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77620' 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77620 00:21:31.829 06:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77620 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:32.089 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:32.349 00:21:32.349 real 0m13.320s 00:21:32.349 user 0m22.410s 00:21:32.349 sys 0m2.596s 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.349 ************************************ 00:21:32.349 END TEST nvmf_discovery_remove_ifc 00:21:32.349 ************************************ 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.349 ************************************ 00:21:32.349 START TEST nvmf_identify_kernel_target 00:21:32.349 ************************************ 00:21:32.349 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:32.609 * Looking for test storage... 
00:21:32.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:32.609 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:32.609 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.610 --rc genhtml_branch_coverage=1 00:21:32.610 --rc genhtml_function_coverage=1 00:21:32.610 --rc genhtml_legend=1 00:21:32.610 --rc geninfo_all_blocks=1 00:21:32.610 --rc geninfo_unexecuted_blocks=1 00:21:32.610 00:21:32.610 ' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.610 --rc genhtml_branch_coverage=1 00:21:32.610 --rc genhtml_function_coverage=1 00:21:32.610 --rc genhtml_legend=1 00:21:32.610 --rc geninfo_all_blocks=1 00:21:32.610 --rc geninfo_unexecuted_blocks=1 00:21:32.610 00:21:32.610 ' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.610 --rc genhtml_branch_coverage=1 00:21:32.610 --rc genhtml_function_coverage=1 00:21:32.610 --rc genhtml_legend=1 00:21:32.610 --rc geninfo_all_blocks=1 00:21:32.610 --rc geninfo_unexecuted_blocks=1 00:21:32.610 00:21:32.610 ' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.610 --rc genhtml_branch_coverage=1 00:21:32.610 --rc genhtml_function_coverage=1 00:21:32.610 --rc genhtml_legend=1 00:21:32.610 --rc geninfo_all_blocks=1 00:21:32.610 --rc geninfo_unexecuted_blocks=1 00:21:32.610 00:21:32.610 ' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.610 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:32.611 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:32.611 06:16:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:32.611 06:16:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:32.611 Cannot find device "nvmf_init_br" 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:32.611 Cannot find device "nvmf_init_br2" 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:32.611 Cannot find device "nvmf_tgt_br" 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:32.611 Cannot find device "nvmf_tgt_br2" 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:32.611 Cannot find device "nvmf_init_br" 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:32.611 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:32.870 Cannot find device "nvmf_init_br2" 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:32.870 Cannot find device "nvmf_tgt_br" 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:32.870 Cannot find device "nvmf_tgt_br2" 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:32.870 Cannot find device "nvmf_br" 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:32.870 Cannot find device "nvmf_init_if" 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:32.870 Cannot find device "nvmf_init_if2" 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:32.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:32.870 06:16:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:32.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:32.870 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:32.870 06:16:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:33.130 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:33.130 06:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:33.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:33.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:21:33.130 00:21:33.130 --- 10.0.0.3 ping statistics --- 00:21:33.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.130 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:33.130 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:33.130 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:21:33.130 00:21:33.130 --- 10.0.0.4 ping statistics --- 00:21:33.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.130 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:33.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:33.130 00:21:33.130 --- 10.0.0.1 ping statistics --- 00:21:33.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.130 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:33.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:33.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:21:33.130 00:21:33.130 --- 10.0.0.2 ping statistics --- 00:21:33.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.130 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:33.130 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:33.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:33.647 Waiting for block devices as requested 00:21:33.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.647 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.647 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:33.647 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:33.647 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:33.647 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:33.648 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:33.648 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:33.648 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:33.648 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:33.648 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:33.907 No valid GPT data, bailing 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:33.907 06:16:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:33.907 No valid GPT data, bailing 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:33.907 No valid GPT data, bailing 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:33.907 06:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:34.166 No valid GPT data, bailing 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:21:34.166 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:21:34.167 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:34.167 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -a 10.0.0.1 -t tcp -s 4420 00:21:34.167 00:21:34.167 Discovery Log Number of Records 2, Generation counter 2 00:21:34.167 =====Discovery Log Entry 0====== 00:21:34.167 trtype: tcp 00:21:34.167 adrfam: ipv4 00:21:34.167 subtype: current discovery subsystem 00:21:34.167 treq: not specified, sq flow control disable supported 00:21:34.167 portid: 1 00:21:34.167 trsvcid: 4420 00:21:34.167 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:34.167 traddr: 10.0.0.1 00:21:34.167 eflags: none 00:21:34.167 sectype: none 00:21:34.167 =====Discovery Log Entry 1====== 00:21:34.167 trtype: tcp 00:21:34.167 adrfam: ipv4 00:21:34.167 subtype: nvme subsystem 00:21:34.167 treq: not 
specified, sq flow control disable supported 00:21:34.167 portid: 1 00:21:34.167 trsvcid: 4420 00:21:34.167 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:34.167 traddr: 10.0.0.1 00:21:34.167 eflags: none 00:21:34.167 sectype: none 00:21:34.167 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:34.167 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:34.426 ===================================================== 00:21:34.426 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:34.426 ===================================================== 00:21:34.426 Controller Capabilities/Features 00:21:34.426 ================================ 00:21:34.426 Vendor ID: 0000 00:21:34.426 Subsystem Vendor ID: 0000 00:21:34.426 Serial Number: 088e6483bbc55feac3b6 00:21:34.426 Model Number: Linux 00:21:34.426 Firmware Version: 6.8.9-20 00:21:34.426 Recommended Arb Burst: 0 00:21:34.426 IEEE OUI Identifier: 00 00 00 00:21:34.426 Multi-path I/O 00:21:34.426 May have multiple subsystem ports: No 00:21:34.426 May have multiple controllers: No 00:21:34.426 Associated with SR-IOV VF: No 00:21:34.426 Max Data Transfer Size: Unlimited 00:21:34.426 Max Number of Namespaces: 0 00:21:34.426 Max Number of I/O Queues: 1024 00:21:34.426 NVMe Specification Version (VS): 1.3 00:21:34.426 NVMe Specification Version (Identify): 1.3 00:21:34.426 Maximum Queue Entries: 1024 00:21:34.426 Contiguous Queues Required: No 00:21:34.426 Arbitration Mechanisms Supported 00:21:34.426 Weighted Round Robin: Not Supported 00:21:34.426 Vendor Specific: Not Supported 00:21:34.426 Reset Timeout: 7500 ms 00:21:34.426 Doorbell Stride: 4 bytes 00:21:34.426 NVM Subsystem Reset: Not Supported 00:21:34.426 Command Sets Supported 00:21:34.426 NVM Command Set: Supported 00:21:34.426 Boot Partition: Not Supported 00:21:34.426 Memory Page Size Minimum: 4096 bytes 00:21:34.426 Memory Page Size Maximum: 4096 bytes 00:21:34.426 Persistent Memory Region: Not Supported 00:21:34.426 Optional Asynchronous Events Supported 00:21:34.426 Namespace Attribute Notices: Not Supported 00:21:34.426 Firmware Activation Notices: Not Supported 00:21:34.426 ANA Change Notices: Not Supported 00:21:34.426 PLE Aggregate Log Change Notices: Not Supported 00:21:34.426 LBA Status Info Alert Notices: Not Supported 00:21:34.426 EGE Aggregate Log Change Notices: Not Supported 00:21:34.426 Normal NVM Subsystem Shutdown event: Not Supported 00:21:34.426 Zone Descriptor Change Notices: Not Supported 00:21:34.426 Discovery Log Change Notices: Supported 00:21:34.426 Controller Attributes 00:21:34.426 128-bit Host Identifier: Not Supported 00:21:34.426 Non-Operational Permissive Mode: Not Supported 00:21:34.426 NVM Sets: Not Supported 00:21:34.426 Read Recovery Levels: Not Supported 00:21:34.426 Endurance Groups: Not Supported 00:21:34.426 Predictable Latency Mode: Not Supported 00:21:34.426 Traffic Based Keep ALive: Not Supported 00:21:34.426 Namespace Granularity: Not Supported 00:21:34.426 SQ Associations: Not Supported 00:21:34.426 UUID List: Not Supported 00:21:34.426 Multi-Domain Subsystem: Not Supported 00:21:34.426 Fixed Capacity Management: Not Supported 00:21:34.426 Variable Capacity Management: Not Supported 00:21:34.426 Delete Endurance Group: Not Supported 00:21:34.426 Delete NVM Set: Not Supported 00:21:34.426 Extended LBA Formats Supported: Not Supported 00:21:34.426 Flexible Data 
Placement Supported: Not Supported 00:21:34.426 00:21:34.426 Controller Memory Buffer Support 00:21:34.426 ================================ 00:21:34.426 Supported: No 00:21:34.426 00:21:34.426 Persistent Memory Region Support 00:21:34.426 ================================ 00:21:34.426 Supported: No 00:21:34.426 00:21:34.426 Admin Command Set Attributes 00:21:34.426 ============================ 00:21:34.426 Security Send/Receive: Not Supported 00:21:34.426 Format NVM: Not Supported 00:21:34.426 Firmware Activate/Download: Not Supported 00:21:34.426 Namespace Management: Not Supported 00:21:34.427 Device Self-Test: Not Supported 00:21:34.427 Directives: Not Supported 00:21:34.427 NVMe-MI: Not Supported 00:21:34.427 Virtualization Management: Not Supported 00:21:34.427 Doorbell Buffer Config: Not Supported 00:21:34.427 Get LBA Status Capability: Not Supported 00:21:34.427 Command & Feature Lockdown Capability: Not Supported 00:21:34.427 Abort Command Limit: 1 00:21:34.427 Async Event Request Limit: 1 00:21:34.427 Number of Firmware Slots: N/A 00:21:34.427 Firmware Slot 1 Read-Only: N/A 00:21:34.427 Firmware Activation Without Reset: N/A 00:21:34.427 Multiple Update Detection Support: N/A 00:21:34.427 Firmware Update Granularity: No Information Provided 00:21:34.427 Per-Namespace SMART Log: No 00:21:34.427 Asymmetric Namespace Access Log Page: Not Supported 00:21:34.427 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:34.427 Command Effects Log Page: Not Supported 00:21:34.427 Get Log Page Extended Data: Supported 00:21:34.427 Telemetry Log Pages: Not Supported 00:21:34.427 Persistent Event Log Pages: Not Supported 00:21:34.427 Supported Log Pages Log Page: May Support 00:21:34.427 Commands Supported & Effects Log Page: Not Supported 00:21:34.427 Feature Identifiers & Effects Log Page:May Support 00:21:34.427 NVMe-MI Commands & Effects Log Page: May Support 00:21:34.427 Data Area 4 for Telemetry Log: Not Supported 00:21:34.427 Error Log Page Entries Supported: 1 00:21:34.427 Keep Alive: Not Supported 00:21:34.427 00:21:34.427 NVM Command Set Attributes 00:21:34.427 ========================== 00:21:34.427 Submission Queue Entry Size 00:21:34.427 Max: 1 00:21:34.427 Min: 1 00:21:34.427 Completion Queue Entry Size 00:21:34.427 Max: 1 00:21:34.427 Min: 1 00:21:34.427 Number of Namespaces: 0 00:21:34.427 Compare Command: Not Supported 00:21:34.427 Write Uncorrectable Command: Not Supported 00:21:34.427 Dataset Management Command: Not Supported 00:21:34.427 Write Zeroes Command: Not Supported 00:21:34.427 Set Features Save Field: Not Supported 00:21:34.427 Reservations: Not Supported 00:21:34.427 Timestamp: Not Supported 00:21:34.427 Copy: Not Supported 00:21:34.427 Volatile Write Cache: Not Present 00:21:34.427 Atomic Write Unit (Normal): 1 00:21:34.427 Atomic Write Unit (PFail): 1 00:21:34.427 Atomic Compare & Write Unit: 1 00:21:34.427 Fused Compare & Write: Not Supported 00:21:34.427 Scatter-Gather List 00:21:34.427 SGL Command Set: Supported 00:21:34.427 SGL Keyed: Not Supported 00:21:34.427 SGL Bit Bucket Descriptor: Not Supported 00:21:34.427 SGL Metadata Pointer: Not Supported 00:21:34.427 Oversized SGL: Not Supported 00:21:34.427 SGL Metadata Address: Not Supported 00:21:34.427 SGL Offset: Supported 00:21:34.427 Transport SGL Data Block: Not Supported 00:21:34.427 Replay Protected Memory Block: Not Supported 00:21:34.427 00:21:34.427 Firmware Slot Information 00:21:34.427 ========================= 00:21:34.427 Active slot: 0 00:21:34.427 00:21:34.427 00:21:34.427 Error Log 
00:21:34.427 ========= 00:21:34.427 00:21:34.427 Active Namespaces 00:21:34.427 ================= 00:21:34.427 Discovery Log Page 00:21:34.427 ================== 00:21:34.427 Generation Counter: 2 00:21:34.427 Number of Records: 2 00:21:34.427 Record Format: 0 00:21:34.427 00:21:34.427 Discovery Log Entry 0 00:21:34.427 ---------------------- 00:21:34.427 Transport Type: 3 (TCP) 00:21:34.427 Address Family: 1 (IPv4) 00:21:34.427 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:34.427 Entry Flags: 00:21:34.427 Duplicate Returned Information: 0 00:21:34.427 Explicit Persistent Connection Support for Discovery: 0 00:21:34.427 Transport Requirements: 00:21:34.427 Secure Channel: Not Specified 00:21:34.427 Port ID: 1 (0x0001) 00:21:34.427 Controller ID: 65535 (0xffff) 00:21:34.427 Admin Max SQ Size: 32 00:21:34.427 Transport Service Identifier: 4420 00:21:34.427 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:34.427 Transport Address: 10.0.0.1 00:21:34.427 Discovery Log Entry 1 00:21:34.427 ---------------------- 00:21:34.427 Transport Type: 3 (TCP) 00:21:34.427 Address Family: 1 (IPv4) 00:21:34.427 Subsystem Type: 2 (NVM Subsystem) 00:21:34.427 Entry Flags: 00:21:34.427 Duplicate Returned Information: 0 00:21:34.427 Explicit Persistent Connection Support for Discovery: 0 00:21:34.427 Transport Requirements: 00:21:34.427 Secure Channel: Not Specified 00:21:34.427 Port ID: 1 (0x0001) 00:21:34.427 Controller ID: 65535 (0xffff) 00:21:34.427 Admin Max SQ Size: 32 00:21:34.427 Transport Service Identifier: 4420 00:21:34.427 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:34.427 Transport Address: 10.0.0.1 00:21:34.427 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.427 get_feature(0x01) failed 00:21:34.427 get_feature(0x02) failed 00:21:34.427 get_feature(0x04) failed 00:21:34.427 ===================================================== 00:21:34.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:34.427 ===================================================== 00:21:34.427 Controller Capabilities/Features 00:21:34.427 ================================ 00:21:34.427 Vendor ID: 0000 00:21:34.427 Subsystem Vendor ID: 0000 00:21:34.427 Serial Number: 5542fa8ce22805a342b7 00:21:34.427 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:34.427 Firmware Version: 6.8.9-20 00:21:34.427 Recommended Arb Burst: 6 00:21:34.427 IEEE OUI Identifier: 00 00 00 00:21:34.427 Multi-path I/O 00:21:34.427 May have multiple subsystem ports: Yes 00:21:34.427 May have multiple controllers: Yes 00:21:34.427 Associated with SR-IOV VF: No 00:21:34.427 Max Data Transfer Size: Unlimited 00:21:34.427 Max Number of Namespaces: 1024 00:21:34.427 Max Number of I/O Queues: 128 00:21:34.427 NVMe Specification Version (VS): 1.3 00:21:34.427 NVMe Specification Version (Identify): 1.3 00:21:34.427 Maximum Queue Entries: 1024 00:21:34.427 Contiguous Queues Required: No 00:21:34.427 Arbitration Mechanisms Supported 00:21:34.427 Weighted Round Robin: Not Supported 00:21:34.427 Vendor Specific: Not Supported 00:21:34.427 Reset Timeout: 7500 ms 00:21:34.427 Doorbell Stride: 4 bytes 00:21:34.427 NVM Subsystem Reset: Not Supported 00:21:34.427 Command Sets Supported 00:21:34.427 NVM Command Set: Supported 00:21:34.427 Boot Partition: Not Supported 00:21:34.427 Memory 
Page Size Minimum: 4096 bytes 00:21:34.427 Memory Page Size Maximum: 4096 bytes 00:21:34.427 Persistent Memory Region: Not Supported 00:21:34.427 Optional Asynchronous Events Supported 00:21:34.427 Namespace Attribute Notices: Supported 00:21:34.427 Firmware Activation Notices: Not Supported 00:21:34.427 ANA Change Notices: Supported 00:21:34.427 PLE Aggregate Log Change Notices: Not Supported 00:21:34.427 LBA Status Info Alert Notices: Not Supported 00:21:34.427 EGE Aggregate Log Change Notices: Not Supported 00:21:34.427 Normal NVM Subsystem Shutdown event: Not Supported 00:21:34.427 Zone Descriptor Change Notices: Not Supported 00:21:34.427 Discovery Log Change Notices: Not Supported 00:21:34.427 Controller Attributes 00:21:34.427 128-bit Host Identifier: Supported 00:21:34.427 Non-Operational Permissive Mode: Not Supported 00:21:34.427 NVM Sets: Not Supported 00:21:34.427 Read Recovery Levels: Not Supported 00:21:34.427 Endurance Groups: Not Supported 00:21:34.427 Predictable Latency Mode: Not Supported 00:21:34.427 Traffic Based Keep ALive: Supported 00:21:34.427 Namespace Granularity: Not Supported 00:21:34.427 SQ Associations: Not Supported 00:21:34.427 UUID List: Not Supported 00:21:34.427 Multi-Domain Subsystem: Not Supported 00:21:34.427 Fixed Capacity Management: Not Supported 00:21:34.427 Variable Capacity Management: Not Supported 00:21:34.427 Delete Endurance Group: Not Supported 00:21:34.427 Delete NVM Set: Not Supported 00:21:34.427 Extended LBA Formats Supported: Not Supported 00:21:34.427 Flexible Data Placement Supported: Not Supported 00:21:34.427 00:21:34.427 Controller Memory Buffer Support 00:21:34.427 ================================ 00:21:34.427 Supported: No 00:21:34.427 00:21:34.427 Persistent Memory Region Support 00:21:34.427 ================================ 00:21:34.427 Supported: No 00:21:34.427 00:21:34.427 Admin Command Set Attributes 00:21:34.427 ============================ 00:21:34.427 Security Send/Receive: Not Supported 00:21:34.427 Format NVM: Not Supported 00:21:34.427 Firmware Activate/Download: Not Supported 00:21:34.427 Namespace Management: Not Supported 00:21:34.427 Device Self-Test: Not Supported 00:21:34.427 Directives: Not Supported 00:21:34.427 NVMe-MI: Not Supported 00:21:34.427 Virtualization Management: Not Supported 00:21:34.428 Doorbell Buffer Config: Not Supported 00:21:34.428 Get LBA Status Capability: Not Supported 00:21:34.428 Command & Feature Lockdown Capability: Not Supported 00:21:34.428 Abort Command Limit: 4 00:21:34.428 Async Event Request Limit: 4 00:21:34.428 Number of Firmware Slots: N/A 00:21:34.428 Firmware Slot 1 Read-Only: N/A 00:21:34.428 Firmware Activation Without Reset: N/A 00:21:34.428 Multiple Update Detection Support: N/A 00:21:34.428 Firmware Update Granularity: No Information Provided 00:21:34.428 Per-Namespace SMART Log: Yes 00:21:34.428 Asymmetric Namespace Access Log Page: Supported 00:21:34.428 ANA Transition Time : 10 sec 00:21:34.428 00:21:34.428 Asymmetric Namespace Access Capabilities 00:21:34.428 ANA Optimized State : Supported 00:21:34.428 ANA Non-Optimized State : Supported 00:21:34.428 ANA Inaccessible State : Supported 00:21:34.428 ANA Persistent Loss State : Supported 00:21:34.428 ANA Change State : Supported 00:21:34.428 ANAGRPID is not changed : No 00:21:34.428 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:34.428 00:21:34.428 ANA Group Identifier Maximum : 128 00:21:34.428 Number of ANA Group Identifiers : 128 00:21:34.428 Max Number of Allowed Namespaces : 1024 00:21:34.428 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:34.428 Command Effects Log Page: Supported 00:21:34.428 Get Log Page Extended Data: Supported 00:21:34.428 Telemetry Log Pages: Not Supported 00:21:34.428 Persistent Event Log Pages: Not Supported 00:21:34.428 Supported Log Pages Log Page: May Support 00:21:34.428 Commands Supported & Effects Log Page: Not Supported 00:21:34.428 Feature Identifiers & Effects Log Page:May Support 00:21:34.428 NVMe-MI Commands & Effects Log Page: May Support 00:21:34.428 Data Area 4 for Telemetry Log: Not Supported 00:21:34.428 Error Log Page Entries Supported: 128 00:21:34.428 Keep Alive: Supported 00:21:34.428 Keep Alive Granularity: 1000 ms 00:21:34.428 00:21:34.428 NVM Command Set Attributes 00:21:34.428 ========================== 00:21:34.428 Submission Queue Entry Size 00:21:34.428 Max: 64 00:21:34.428 Min: 64 00:21:34.428 Completion Queue Entry Size 00:21:34.428 Max: 16 00:21:34.428 Min: 16 00:21:34.428 Number of Namespaces: 1024 00:21:34.428 Compare Command: Not Supported 00:21:34.428 Write Uncorrectable Command: Not Supported 00:21:34.428 Dataset Management Command: Supported 00:21:34.428 Write Zeroes Command: Supported 00:21:34.428 Set Features Save Field: Not Supported 00:21:34.428 Reservations: Not Supported 00:21:34.428 Timestamp: Not Supported 00:21:34.428 Copy: Not Supported 00:21:34.428 Volatile Write Cache: Present 00:21:34.428 Atomic Write Unit (Normal): 1 00:21:34.428 Atomic Write Unit (PFail): 1 00:21:34.428 Atomic Compare & Write Unit: 1 00:21:34.428 Fused Compare & Write: Not Supported 00:21:34.428 Scatter-Gather List 00:21:34.428 SGL Command Set: Supported 00:21:34.428 SGL Keyed: Not Supported 00:21:34.428 SGL Bit Bucket Descriptor: Not Supported 00:21:34.428 SGL Metadata Pointer: Not Supported 00:21:34.428 Oversized SGL: Not Supported 00:21:34.428 SGL Metadata Address: Not Supported 00:21:34.428 SGL Offset: Supported 00:21:34.428 Transport SGL Data Block: Not Supported 00:21:34.428 Replay Protected Memory Block: Not Supported 00:21:34.428 00:21:34.428 Firmware Slot Information 00:21:34.428 ========================= 00:21:34.428 Active slot: 0 00:21:34.428 00:21:34.428 Asymmetric Namespace Access 00:21:34.428 =========================== 00:21:34.428 Change Count : 0 00:21:34.428 Number of ANA Group Descriptors : 1 00:21:34.428 ANA Group Descriptor : 0 00:21:34.428 ANA Group ID : 1 00:21:34.428 Number of NSID Values : 1 00:21:34.428 Change Count : 0 00:21:34.428 ANA State : 1 00:21:34.428 Namespace Identifier : 1 00:21:34.428 00:21:34.428 Commands Supported and Effects 00:21:34.428 ============================== 00:21:34.428 Admin Commands 00:21:34.428 -------------- 00:21:34.428 Get Log Page (02h): Supported 00:21:34.428 Identify (06h): Supported 00:21:34.428 Abort (08h): Supported 00:21:34.428 Set Features (09h): Supported 00:21:34.428 Get Features (0Ah): Supported 00:21:34.428 Asynchronous Event Request (0Ch): Supported 00:21:34.428 Keep Alive (18h): Supported 00:21:34.428 I/O Commands 00:21:34.428 ------------ 00:21:34.428 Flush (00h): Supported 00:21:34.428 Write (01h): Supported LBA-Change 00:21:34.428 Read (02h): Supported 00:21:34.428 Write Zeroes (08h): Supported LBA-Change 00:21:34.428 Dataset Management (09h): Supported 00:21:34.428 00:21:34.428 Error Log 00:21:34.428 ========= 00:21:34.428 Entry: 0 00:21:34.428 Error Count: 0x3 00:21:34.428 Submission Queue Id: 0x0 00:21:34.428 Command Id: 0x5 00:21:34.428 Phase Bit: 0 00:21:34.428 Status Code: 0x2 00:21:34.428 Status Code Type: 0x0 00:21:34.428 Do Not Retry: 1 00:21:34.428 Error 
Location: 0x28 00:21:34.428 LBA: 0x0 00:21:34.428 Namespace: 0x0 00:21:34.428 Vendor Log Page: 0x0 00:21:34.428 ----------- 00:21:34.428 Entry: 1 00:21:34.428 Error Count: 0x2 00:21:34.428 Submission Queue Id: 0x0 00:21:34.428 Command Id: 0x5 00:21:34.428 Phase Bit: 0 00:21:34.428 Status Code: 0x2 00:21:34.428 Status Code Type: 0x0 00:21:34.428 Do Not Retry: 1 00:21:34.428 Error Location: 0x28 00:21:34.428 LBA: 0x0 00:21:34.428 Namespace: 0x0 00:21:34.428 Vendor Log Page: 0x0 00:21:34.428 ----------- 00:21:34.428 Entry: 2 00:21:34.428 Error Count: 0x1 00:21:34.428 Submission Queue Id: 0x0 00:21:34.428 Command Id: 0x4 00:21:34.428 Phase Bit: 0 00:21:34.428 Status Code: 0x2 00:21:34.428 Status Code Type: 0x0 00:21:34.428 Do Not Retry: 1 00:21:34.428 Error Location: 0x28 00:21:34.428 LBA: 0x0 00:21:34.428 Namespace: 0x0 00:21:34.428 Vendor Log Page: 0x0 00:21:34.428 00:21:34.428 Number of Queues 00:21:34.428 ================ 00:21:34.428 Number of I/O Submission Queues: 128 00:21:34.428 Number of I/O Completion Queues: 128 00:21:34.428 00:21:34.428 ZNS Specific Controller Data 00:21:34.428 ============================ 00:21:34.428 Zone Append Size Limit: 0 00:21:34.428 00:21:34.428 00:21:34.428 Active Namespaces 00:21:34.428 ================= 00:21:34.428 get_feature(0x05) failed 00:21:34.428 Namespace ID:1 00:21:34.428 Command Set Identifier: NVM (00h) 00:21:34.428 Deallocate: Supported 00:21:34.428 Deallocated/Unwritten Error: Not Supported 00:21:34.428 Deallocated Read Value: Unknown 00:21:34.428 Deallocate in Write Zeroes: Not Supported 00:21:34.428 Deallocated Guard Field: 0xFFFF 00:21:34.428 Flush: Supported 00:21:34.428 Reservation: Not Supported 00:21:34.428 Namespace Sharing Capabilities: Multiple Controllers 00:21:34.428 Size (in LBAs): 1310720 (5GiB) 00:21:34.428 Capacity (in LBAs): 1310720 (5GiB) 00:21:34.428 Utilization (in LBAs): 1310720 (5GiB) 00:21:34.428 UUID: 1a182242-abb8-4986-bb0b-ff4c9041bda8 00:21:34.428 Thin Provisioning: Not Supported 00:21:34.428 Per-NS Atomic Units: Yes 00:21:34.428 Atomic Boundary Size (Normal): 0 00:21:34.428 Atomic Boundary Size (PFail): 0 00:21:34.428 Atomic Boundary Offset: 0 00:21:34.428 NGUID/EUI64 Never Reused: No 00:21:34.428 ANA group ID: 1 00:21:34.428 Namespace Write Protected: No 00:21:34.428 Number of LBA Formats: 1 00:21:34.428 Current LBA Format: LBA Format #00 00:21:34.428 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:34.428 00:21:34.428 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:34.428 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.428 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.688 rmmod nvme_tcp 00:21:34.688 rmmod nvme_fabrics 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:34.688 06:16:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.688 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:34.948 06:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:35.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:35.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:35.776 00:21:35.776 real 0m3.381s 00:21:35.776 user 0m1.182s 00:21:35.776 sys 0m1.532s 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.776 ************************************ 00:21:35.776 END TEST nvmf_identify_kernel_target 00:21:35.776 ************************************ 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.776 ************************************ 00:21:35.776 START TEST nvmf_auth_host 00:21:35.776 ************************************ 00:21:35.776 06:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:36.036 * Looking for test storage... 
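Note: clean_kernel_target above removes the kernel nvmet target that the identify_kernel_target test had configured through configfs. A minimal standalone sketch of the same teardown order; the paths are taken from the log, while the target of the `echo 0` redirect is not shown in the xtrace and is assumed to be the namespace enable attribute:

```bash
#!/usr/bin/env bash
# Sketch of the configfs teardown order used by clean_kernel_target above.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet

[[ -e $nvmet/subsystems/$nqn ]] || exit 0                # nothing to clean up
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"    # assumption: disable the namespace first
rm -f  "$nvmet/ports/1/subsystems/$nqn"                  # unlink subsystem from the port
rmdir  "$nvmet/subsystems/$nqn/namespaces/1"
rmdir  "$nvmet/ports/1"
rmdir  "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules
```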
00:21:36.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.036 06:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:36.036 06:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:36.036 06:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.036 --rc genhtml_branch_coverage=1 00:21:36.036 --rc genhtml_function_coverage=1 00:21:36.036 --rc genhtml_legend=1 00:21:36.036 --rc geninfo_all_blocks=1 00:21:36.036 --rc geninfo_unexecuted_blocks=1 00:21:36.036 00:21:36.036 ' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.036 --rc genhtml_branch_coverage=1 00:21:36.036 --rc genhtml_function_coverage=1 00:21:36.036 --rc genhtml_legend=1 00:21:36.036 --rc geninfo_all_blocks=1 00:21:36.036 --rc geninfo_unexecuted_blocks=1 00:21:36.036 00:21:36.036 ' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.036 --rc genhtml_branch_coverage=1 00:21:36.036 --rc genhtml_function_coverage=1 00:21:36.036 --rc genhtml_legend=1 00:21:36.036 --rc geninfo_all_blocks=1 00:21:36.036 --rc geninfo_unexecuted_blocks=1 00:21:36.036 00:21:36.036 ' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.036 --rc genhtml_branch_coverage=1 00:21:36.036 --rc genhtml_function_coverage=1 00:21:36.036 --rc genhtml_legend=1 00:21:36.036 --rc geninfo_all_blocks=1 00:21:36.036 --rc geninfo_unexecuted_blocks=1 00:21:36.036 00:21:36.036 ' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
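Note: the scripts/common.sh calls above (cmp_versions, decimal, the ver1/ver2 arrays) are a plain field-by-field dotted-version comparison, used here to check whether the installed lcov is older than 2.x before choosing the coverage option names. A self-contained sketch of the same idea, independent of the SPDK helpers:

```bash
#!/usr/bin/env bash
# Field-by-field dotted version comparison, as done by cmp_versions above.
# Prints "lt", "eq" or "gt" for $1 versus $2.
cmp_versions() {
    local IFS=.- a b i x y
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((10#$x < 10#$y)) && { echo lt; return; }
        ((10#$x > 10#$y)) && { echo gt; return; }
    done
    echo eq
}

cmp_versions 1.15 2    # -> lt (1.15 is older than 2, so the 1.x-style --rc names are used above)
cmp_versions 2.0 1.15  # -> gt
```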
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.036 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:36.037 Cannot find device "nvmf_init_br" 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:36.037 Cannot find device "nvmf_init_br2" 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:36.037 Cannot find device "nvmf_tgt_br" 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.037 Cannot find device "nvmf_tgt_br2" 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:36.037 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:36.297 Cannot find device "nvmf_init_br" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:36.297 Cannot find device "nvmf_init_br2" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:36.297 Cannot find device "nvmf_tgt_br" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:36.297 Cannot find device "nvmf_tgt_br2" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:36.297 Cannot find device "nvmf_br" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:36.297 Cannot find device "nvmf_init_if" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:36.297 Cannot find device "nvmf_init_if2" 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.297 06:16:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.297 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
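Note: nvmf_veth_init above builds the test topology from scratch: a target network namespace, two initiator-side veth pairs and two target-side veth pairs, with all four host-side peer ends joined by the nvmf_br bridge. Condensed into one standalone sketch with the same interface names and addresses as in the log:

```bash
#!/usr/bin/env bash
# Condensed version of the veth/bridge topology set up by nvmf_veth_init above.
set -e
ns=nvmf_tgt_ns_spdk

ip netns add "$ns"

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged;
# the target-side *_if ends move into the namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$ns"
ip link set nvmf_tgt_if2 netns "$ns"

# addressing: initiators 10.0.0.1/.2 in the default netns, targets 10.0.0.3/.4 inside $ns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$ns" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up

# join the host-side peer ends with a bridge
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
```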
00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:36.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:36.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:21:36.558 00:21:36.558 --- 10.0.0.3 ping statistics --- 00:21:36.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.558 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:36.558 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:36.558 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:21:36.558 00:21:36.558 --- 10.0.0.4 ping statistics --- 00:21:36.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.558 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:36.558 00:21:36.558 --- 10.0.0.1 ping statistics --- 00:21:36.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.558 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:36.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
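Note: the ipts helper above tags every iptables rule it adds with an 'SPDK_NVMF:' comment; the teardown earlier in this log (iptables-save | grep -v SPDK_NVMF | iptables-restore) relies on that tag to strip exactly these rules and nothing else, and the ping runs then confirm connectivity in both directions across the new topology. A minimal sketch of the tagging pattern, assuming a wrapper equivalent to the one whose expansion is shown at common.sh@790:

```bash
#!/usr/bin/env bash
# Tag-and-strip firewall rules, mirroring the ipts/iptr pattern in the log.
ipts() {  # add a rule and tag it so it can be located (and removed) later
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# allow NVMe/TCP (port 4420) in from the test interfaces, plus bridge-local forwarding
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# teardown: drop only the tagged rules, leaving the rest of the ruleset untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore
```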
00:21:36.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:21:36.558 00:21:36.558 --- 10.0.0.2 ping statistics --- 00:21:36.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.558 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78629 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78629 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78629 ']' 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
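Note: nvmfappstart above launches nvmf_tgt inside the target namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth) and then blocks in waitforlisten until the app answers on its RPC socket. A hedged sketch of that start-and-wait pattern; the polling loop is illustrative and not the real waitforlisten implementation:

```bash
#!/usr/bin/env bash
# Start the SPDK target inside the test namespace and wait for its RPC socket.
set -e
ns=nvmf_tgt_ns_spdk
spdk=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk.sock

ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."

# Illustrative stand-in for waitforlisten: poll until an RPC goes through.
for _ in $(seq 1 100); do
    if "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
```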
00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.558 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:37.128 06:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e9b3d39e25b1eb49fbaaa60263e05f86 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7hs 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e9b3d39e25b1eb49fbaaa60263e05f86 0 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e9b3d39e25b1eb49fbaaa60263e05f86 0 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e9b3d39e25b1eb49fbaaa60263e05f86 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7hs 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7hs 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.7hs 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.128 06:16:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc47b3e04c2314818a65be0cc9da3de394a1b0cde777e2b05fcab807537d26e7 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.BJu 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc47b3e04c2314818a65be0cc9da3de394a1b0cde777e2b05fcab807537d26e7 3 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc47b3e04c2314818a65be0cc9da3de394a1b0cde777e2b05fcab807537d26e7 3 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc47b3e04c2314818a65be0cc9da3de394a1b0cde777e2b05fcab807537d26e7 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BJu 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BJu 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.BJu 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c861e84ae1b70a3cae9370cd84e7e91620d95e2568a9f01 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vxA 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c861e84ae1b70a3cae9370cd84e7e91620d95e2568a9f01 0 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c861e84ae1b70a3cae9370cd84e7e91620d95e2568a9f01 0 
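Note: each gen_dhchap_key call above draws a random secret from /dev/urandom with xxd, writes it to a mode-0600 temp file, and wraps it through a small inline python snippet whose body is not echoed in the xtrace. A hedged sketch of the whole pattern: the DHHC-1 framing (base64 of the secret plus a little-endian CRC32, preceded by a hash id) is an assumption based on the NVMe DH-HMAC-CHAP secret representation, and the null/sha256/sha384/sha512 -> 0..3 mapping is taken from the digests array above:

```bash
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key pattern used above. The exact python body is not
# shown in the log; the DHHC-1 framing below is an assumption.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")

    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{hash_id:02x}:{base64.b64encode(secret + crc).decode()}:", end="")
PY

    chmod 0600 "$file"
    echo "$file"
}

key0=$(gen_dhchap_key null 32)      # e.g. /tmp/spdk.key-null.XXX
ckey0=$(gen_dhchap_key sha512 64)   # controller-side counterpart, as in the log
```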
00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c861e84ae1b70a3cae9370cd84e7e91620d95e2568a9f01 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vxA 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vxA 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vxA 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d332181762d56b24debe488d27451ac9079131d8ffed0df7 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZJJ 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d332181762d56b24debe488d27451ac9079131d8ffed0df7 2 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d332181762d56b24debe488d27451ac9079131d8ffed0df7 2 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d332181762d56b24debe488d27451ac9079131d8ffed0df7 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:37.128 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZJJ 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZJJ 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZJJ 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.388 06:16:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=95e3d9a7a321ec67a636f39638e160e2 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CYJ 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 95e3d9a7a321ec67a636f39638e160e2 1 00:21:37.388 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 95e3d9a7a321ec67a636f39638e160e2 1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=95e3d9a7a321ec67a636f39638e160e2 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CYJ 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CYJ 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.CYJ 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0f70f539826af4fba6dbcbe1db7d038 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.y3T 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0f70f539826af4fba6dbcbe1db7d038 1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0f70f539826af4fba6dbcbe1db7d038 1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d0f70f539826af4fba6dbcbe1db7d038 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.y3T 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.y3T 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.y3T 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b80f273b8149e1ddb22a2600f4d61b8a70691b54249a39df 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xWu 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b80f273b8149e1ddb22a2600f4d61b8a70691b54249a39df 2 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b80f273b8149e1ddb22a2600f4d61b8a70691b54249a39df 2 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b80f273b8149e1ddb22a2600f4d61b8a70691b54249a39df 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xWu 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xWu 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xWu 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:37.389 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:37.649 06:16:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b0be7f602c66c143a2032a4a75fcf974 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.T1e 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b0be7f602c66c143a2032a4a75fcf974 0 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b0be7f602c66c143a2032a4a75fcf974 0 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b0be7f602c66c143a2032a4a75fcf974 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.T1e 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.T1e 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.T1e 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e8acc06d0ffd070e8d50a122cde823dfe3ebd7eea020d2ea65427cb3852314d 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aMS 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e8acc06d0ffd070e8d50a122cde823dfe3ebd7eea020d2ea65427cb3852314d 3 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e8acc06d0ffd070e8d50a122cde823dfe3ebd7eea020d2ea65427cb3852314d 3 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e8acc06d0ffd070e8d50a122cde823dfe3ebd7eea020d2ea65427cb3852314d 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aMS 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aMS 00:21:37.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.aMS 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78629 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78629 ']' 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.649 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.908 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.908 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:37.908 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:37.908 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7hs 00:21:37.908 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.BJu ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BJu 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vxA 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ZJJ ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ZJJ 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CYJ 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.y3T ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y3T 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xWu 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.T1e ]] 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.T1e 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.909 06:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.aMS 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:38.168 06:16:43 
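Note: once nvmf_tgt is up, each generated key file is registered with its keyring over RPC via keyring_file_add_key (rpc_cmd in the log is a thin wrapper around scripts/rpc.py talking to the target's RPC socket). The same registrations done directly, using the key names and temp files from this run:

```bash
#!/usr/bin/env bash
# Register the generated DHHC-1 key files with the running nvmf_tgt keyring.
spdk=/home/vagrant/spdk_repo/spdk
rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc keyring_file_add_key key0  /tmp/spdk.key-null.7hs
rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BJu
rpc keyring_file_add_key key1  /tmp/spdk.key-null.vxA
rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZJJ
# ...and likewise for key2/ckey2, key3/ckey3 and key4, as shown in the log.
```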
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:38.168 06:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:38.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.427 Waiting for block devices as requested 00:21:38.427 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:38.686 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:39.254 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:39.255 No valid GPT data, bailing 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:39.255 No valid GPT data, bailing 00:21:39.255 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:39.514 No valid GPT data, bailing 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:39.514 No valid GPT data, bailing 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -a 10.0.0.1 -t tcp -s 4420 00:21:39.514 00:21:39.514 Discovery Log Number of Records 2, Generation counter 2 00:21:39.514 =====Discovery Log Entry 0====== 00:21:39.514 trtype: tcp 00:21:39.514 adrfam: ipv4 00:21:39.514 subtype: current discovery subsystem 00:21:39.514 treq: not specified, sq flow control disable supported 00:21:39.514 portid: 1 00:21:39.514 trsvcid: 4420 00:21:39.514 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:39.514 traddr: 10.0.0.1 00:21:39.514 eflags: none 00:21:39.514 sectype: none 00:21:39.514 =====Discovery Log Entry 1====== 00:21:39.514 trtype: tcp 00:21:39.514 adrfam: ipv4 00:21:39.514 subtype: nvme subsystem 00:21:39.514 treq: not specified, sq flow control disable supported 00:21:39.514 portid: 1 00:21:39.514 trsvcid: 4420 00:21:39.514 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:39.514 traddr: 10.0.0.1 00:21:39.514 eflags: none 00:21:39.514 sectype: none 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:39.514 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:39.515 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.774 nvme0n1 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.774 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.033 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 nvme0n1 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 06:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.034 
06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.034 06:16:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.034 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.035 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 nvme0n1 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:40.294 06:16:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 nvme0n1 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.294 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.554 06:16:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.554 nvme0n1 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:40.554 
06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.554 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.555 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
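The trace above repeats one pattern for every digest/dhgroup pair and key id: nvmet_auth_set_key pushes the expected secret into the kernel target's host entry, connect_authenticate pins the initiator to the same digest and dhgroup with bdev_nvme_set_options, attaches with the matching keyring entries, checks the controller name, and detaches. The following is a minimal out-of-band sketch of one such pass (sha256/ffdhe2048, key id 1), not part of the captured trace; it assumes the harness's rpc_cmd forwards to SPDK's scripts/rpc.py on /var/tmp/spdk.sock and that the redirections hidden by xtrace land in the host entry's dhchap_* configfs attributes.

  # illustrative sketch only; names below mirror the trace, dhchap_* paths are assumed
  HOSTNQN=nqn.2024-02.io.spdk:host0
  SUBNQN=nqn.2024-02.io.spdk:cnode0

  # host side: register the secret files with SPDK's keyring (as done earlier in the trace)
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.vxA
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZJJ

  # kernel target side: allow the host and set the expected digest, dhgroup and secrets
  mkdir -p /sys/kernel/config/nvmet/hosts/$HOSTNQN
  ln -s /sys/kernel/config/nvmet/hosts/$HOSTNQN \
        /sys/kernel/config/nvmet/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN
  echo 'hmac(sha256)'             > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_hash      # assumed path
  echo 'ffdhe2048'                > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_dhgroup   # assumed path
  echo "DHHC-1:00:<host secret>"  > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_key       # assumed path
  echo "DHHC-1:02:<ctrlr secret>" > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_ctrl_key  # assumed path

  # initiator side: pin the digest/dhgroup, attach with both keys, verify, detach
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0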
00:21:40.814 nvme0n1 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.814 06:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:41.073 06:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.073 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.333 nvme0n1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:41.333 06:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.333 06:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.333 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.593 nvme0n1 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.593 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.594 nvme0n1 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.594 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.853 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.854 nvme0n1 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.854 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:42.113 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.114 06:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 nvme0n1 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.114 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.682 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.942 06:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.942 nvme0n1 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.942 06:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.942 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.942 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.942 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.942 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.201 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.202 06:16:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.202 nvme0n1 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.202 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.461 nvme0n1 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.461 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.720 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.721 nvme0n1 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.721 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.979 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.980 06:16:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.980 06:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.304 nvme0n1 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.304 06:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:46.207 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:46.208 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.208 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.208 06:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.208 nvme0n1 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.208 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.467 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.726 nvme0n1 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:46.726 06:16:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.726 06:16:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.726 06:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.985 nvme0n1 00:21:46.985 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.985 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.985 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:46.985 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.985 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:47.244 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:47.245 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.245 
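Each connect_authenticate pass seen above reduces to a handful of SPDK RPCs. Run by hand they would look roughly like the following; rpc_cmd in the trace is a thin wrapper, so the scripts/rpc.py path is an assumption, and the keyring entries key3/ckey3 are created earlier in the test and only referenced here:

    # Approximate stand-alone equivalent of connect_authenticate sha256 ffdhe6144 3.
    # Assumes an NVMe-oF target listening on 10.0.0.1:4420 and DH-HMAC-CHAP secrets
    # already registered under the names key3/ckey3 (done earlier in the test).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # The trace then confirms the authenticated controller exists and removes it
    # so the next digest/dhgroup/key combination starts from a clean state.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
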
06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.505 nvme0n1 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.505 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.764 nvme0n1 00:21:47.764 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.023 06:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.023 06:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.591 nvme0n1 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:48.591 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:48.592 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.592 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.592 06:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 nvme0n1 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.160 
06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.160 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.726 nvme0n1 00:21:49.726 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.726 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:49.726 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.726 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:49.726 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.726 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.986 06:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 nvme0n1 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:16:55 
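The get_main_ns_ip expansion repeated before every attach (nvmf/common.sh@769-783 in the markers) picks the address to dial based on the transport. A rough reconstruction from the trace follows; the name of the variable holding the transport (TEST_TRANSPORT below) and the indirect expansion at the end are assumptions, everything else mirrors the traced statements:

    # Sketch of get_main_ns_ip as inferred from its xtrace output.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # This run uses tcp, so the helper settles on NVMF_INITIATOR_IP ...
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ... which dereferences to 10.0.0.1 in this environment.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
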
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.554 06:16:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.554 06:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.123 nvme0n1 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.123 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.382 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.383 nvme0n1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.383 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 nvme0n1 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:51.643 
06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 nvme0n1 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.643 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.644 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.644 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.644 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.644 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.903 
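The secrets cycled through this trace use the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:NN:<base64 payload>:, where the middle field selects the HMAC associated with the secret (00 = none, 01 = SHA-256/32-byte, 02 = SHA-384/48-byte, 03 = SHA-512/64-byte) and the payload, as typically produced by nvme-cli, is the raw secret followed by a CRC-32. The CRC detail is an assumption about how these keys were generated, not something visible in the trace; a quick way to sanity-check one of the keys from the log:

    # Inspect one of the DHHC-1 secrets seen above; a 01-class key should decode
    # to 36 bytes (32-byte secret + 4-byte CRC-32), a 02-class key to 52 bytes.
    key='DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj:'
    payload=${key#DHHC-1:*:}      # strip the "DHHC-1:NN:" prefix
    payload=${payload%:}          # and the trailing colon
    echo -n "$payload" | base64 -d | wc -c
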
06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.903 nvme0n1 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:51.903 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.904 06:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.163 nvme0n1 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.163 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.164 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.423 nvme0n1 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.423 
06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.423 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.424 06:16:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.424 nvme0n1 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.424 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:52.683 06:16:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.683 nvme0n1 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.683 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.942 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.942 06:16:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.942 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.942 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.943 nvme0n1 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.943 
06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.943 06:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.943 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
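[editor's note] The cycle captured in this part of the log repeats for every dhgroup/keyid pair: bdev_nvme_set_options selects the DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller performs the authenticated connect with the per-keyid secrets, bdev_nvme_get_controllers confirms that nvme0 appeared, and bdev_nvme_detach_controller tears it down again. As a minimal sketch only (not part of the captured output), the same sequence could be replayed by hand with scripts/rpc.py, assuming rpc_cmd in the test maps onto that script and that the key3/ckey3 key names were registered earlier in the run; the method names and flags below are the ones visible in the log, and --dhchap-ctrlr-key is only passed when a controller key exists for that keyid:

  # Sketch: replay one sha384/ffdhe2048 authentication pass against the target at 10.0.0.1:4420.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" once the connect succeeds
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0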
00:21:53.202 nvme0n1 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:53.202 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:53.203 06:16:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.203 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.461 nvme0n1 00:21:53.461 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.461 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.461 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.461 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.461 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.462 06:16:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.462 06:16:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.462 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.721 nvme0n1 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.721 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.722 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.981 nvme0n1 00:21:53.981 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.981 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.981 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.981 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.981 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.981 06:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.981 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.982 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.240 nvme0n1 00:21:54.240 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.240 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.240 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.240 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.241 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.499 nvme0n1 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.499 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:54.758 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.759 06:16:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.759 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.018 nvme0n1 00:21:55.018 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.018 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.018 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.018 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.018 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.018 06:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.018 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.019 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.019 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.019 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.019 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.019 06:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.019 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.599 nvme0n1 00:21:55.599 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.599 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.599 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.600 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.601 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.871 nvme0n1 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:55.871 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.872 06:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.439 nvme0n1 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.439 06:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.439 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.440 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.440 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:56.440 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.440 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.699 nvme0n1 00:21:56.699 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.699 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.699 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.700 06:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.269 nvme0n1 00:21:57.269 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.269 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.269 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.269 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.269 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.269 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:57.528 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.529 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.097 nvme0n1 00:21:58.097 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.097 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.097 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.097 06:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.097 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.097 06:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:58.097 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.098 06:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.666 nvme0n1 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.666 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:58.926 06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.926 
06:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.495 nvme0n1 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.495 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.063 nvme0n1 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:00.063 06:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.063 06:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.063 06:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.063 nvme0n1 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.063 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:00.322 06:17:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.322 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.323 nvme0n1 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.323 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.583 nvme0n1 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.583 nvme0n1 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.583 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.843 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.844 nvme0n1 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.844 06:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.102 nvme0n1 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.102 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.103 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.368 nvme0n1 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:01.368 
06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.368 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.369 nvme0n1 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.369 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.628 
06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.628 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.629 nvme0n1 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.629 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.889 nvme0n1 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.889 06:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 nvme0n1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 
06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.148 06:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 nvme0n1 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:02.407 06:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.407 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.408 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.667 nvme0n1 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:02.667 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.668 06:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.668 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.927 nvme0n1 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.927 
06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.927 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.928 06:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
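Each of the ffdhe4096 iterations traced above runs the same host-side sequence from host/auth.sh: restrict the allowed DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, resolve the initiator IP, attach the controller with the key pair for the current keyid, confirm that nvme0 shows up in bdev_nvme_get_controllers, and detach it again before the next key is tried. A minimal standalone sketch of one such iteration follows, reusing the RPC calls and addresses seen in the trace; the scripts/rpc.py path and the assumption that keys named key0/ckey0 are already registered are placeholders, not details taken from this log.

# One connect_authenticate-style iteration, sketched from the trace above.
# Assumes an SPDK target listening on 10.0.0.1:4420 and DH-HMAC-CHAP keys
# already registered as key0/ckey0 (key file setup not shown here).
rpc=./scripts/rpc.py          # path assumed; point this at your SPDK tree's rpc.py
digest=sha512
dhgroup=ffdhe4096
keyid=0

# Only allow the digest/dhgroup pair under test on the host side.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the numbered key (and controller key, when one exists for this keyid).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The trace then checks the controller really exists before tearing it down.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0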
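The repeated ip_candidates lines in between come from the get_main_ns_ip helper in nvmf/common.sh, which maps the transport in use to the matching NVMF_* variable and indirectly expands it (here always NVMF_INITIATOR_IP, i.e. 10.0.0.1, since the run uses tcp). A rough reconstruction from the xtrace lines only, with TEST_TRANSPORT assumed as the name of the transport variable:

# Reconstructed from the trace; the variable name TEST_TRANSPORT and the exact
# error handling are assumptions, not copied from nvmf/common.sh itself.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}        # e.g. ip=NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] && echo "${!ip}"            # e.g. 10.0.0.1
}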
00:22:03.186 nvme0n1 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.186 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:03.187 06:17:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.187 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.754 nvme0n1 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.754 06:17:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.754 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.754 06:17:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.755 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.755 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.755 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.755 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.755 06:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.032 nvme0n1 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.032 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.326 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.584 nvme0n1 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.584 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.585 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.843 nvme0n1 00:22:04.843 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.843 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.843 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.843 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.843 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.843 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.103 06:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.362 nvme0n1 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.362 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTliM2QzOWUyNWIxZWI0OWZiYWFhNjAyNjNlMDVmODYgDUkB: 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: ]] 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGM0N2IzZTA0YzIzMTQ4MThhNjViZTBjYzlkYTNkZTM5NGExYjBjZGU3NzdlMmIwNWZjYWI4MDc1MzdkMjZlN7OfzG0=: 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.363 06:17:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.363 06:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.299 nvme0n1 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.299 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.300 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.300 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.300 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.300 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.300 06:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.300 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.867 nvme0n1 00:22:06.867 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.868 06:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.435 nvme0n1 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.435 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjgwZjI3M2I4MTQ5ZTFkZGIyMmEyNjAwZjRkNjFiOGE3MDY5MWI1NDI0OWEzOWRmt2qpIA==: 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: ]] 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiZTdmNjAyYzY2YzE0M2EyMDMyYTRhNzVmY2Y5NzRE7vWb: 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.436 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.003 nvme0n1 00:22:08.003 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.003 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.003 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.003 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.003 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.003 06:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U4YWNjMDZkMGZmZDA3MGU4ZDUwYTEyMmNkZTgyM2RmZTNlYmQ3ZWVhMDIwZDJlYTY1NDI3Y2IzODUyMzE0ZNV7svE=: 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:08.003 06:17:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.003 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.570 nvme0n1 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:08.570 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.571 request: 00:22:08.571 { 00:22:08.571 "name": "nvme0", 00:22:08.571 "trtype": "tcp", 00:22:08.571 "traddr": "10.0.0.1", 00:22:08.571 "adrfam": "ipv4", 00:22:08.571 "trsvcid": "4420", 00:22:08.571 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:08.571 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:08.571 "prchk_reftag": false, 00:22:08.571 "prchk_guard": false, 00:22:08.571 "hdgst": false, 00:22:08.571 "ddgst": false, 00:22:08.571 "allow_unrecognized_csi": false, 00:22:08.571 "method": "bdev_nvme_attach_controller", 00:22:08.571 "req_id": 1 00:22:08.571 } 00:22:08.571 Got JSON-RPC error response 00:22:08.571 response: 00:22:08.571 { 00:22:08.571 "code": -5, 00:22:08.571 "message": "Input/output error" 00:22:08.571 } 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:08.571 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.830 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.830 request: 00:22:08.830 { 00:22:08.830 "name": "nvme0", 00:22:08.830 "trtype": "tcp", 00:22:08.831 "traddr": "10.0.0.1", 00:22:08.831 "adrfam": "ipv4", 00:22:08.831 "trsvcid": "4420", 00:22:08.831 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:08.831 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:08.831 "prchk_reftag": false, 00:22:08.831 "prchk_guard": false, 00:22:08.831 "hdgst": false, 00:22:08.831 "ddgst": false, 00:22:08.831 "dhchap_key": "key2", 00:22:08.831 "allow_unrecognized_csi": false, 00:22:08.831 "method": "bdev_nvme_attach_controller", 00:22:08.831 "req_id": 1 00:22:08.831 } 00:22:08.831 Got JSON-RPC error response 00:22:08.831 response: 00:22:08.831 { 00:22:08.831 "code": -5, 00:22:08.831 "message": "Input/output error" 00:22:08.831 } 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.831 06:17:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.831 request: 00:22:08.831 { 00:22:08.831 "name": "nvme0", 00:22:08.831 "trtype": "tcp", 00:22:08.831 "traddr": "10.0.0.1", 00:22:08.831 "adrfam": "ipv4", 00:22:08.831 "trsvcid": "4420", 
00:22:08.831 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:08.831 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:08.831 "prchk_reftag": false, 00:22:08.831 "prchk_guard": false, 00:22:08.831 "hdgst": false, 00:22:08.831 "ddgst": false, 00:22:08.831 "dhchap_key": "key1", 00:22:08.831 "dhchap_ctrlr_key": "ckey2", 00:22:08.831 "allow_unrecognized_csi": false, 00:22:08.831 "method": "bdev_nvme_attach_controller", 00:22:08.831 "req_id": 1 00:22:08.831 } 00:22:08.831 Got JSON-RPC error response 00:22:08.831 response: 00:22:08.831 { 00:22:08.831 "code": -5, 00:22:08.831 "message": "Input/output error" 00:22:08.831 } 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.831 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.090 nvme0n1 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.090 06:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.090 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.091 request: 00:22:09.091 { 00:22:09.091 "name": "nvme0", 00:22:09.091 "dhchap_key": "key1", 00:22:09.091 "dhchap_ctrlr_key": "ckey2", 00:22:09.091 "method": "bdev_nvme_set_keys", 00:22:09.091 "req_id": 1 00:22:09.091 } 00:22:09.091 Got JSON-RPC error response 00:22:09.091 response: 00:22:09.091 
{ 00:22:09.091 "code": -13, 00:22:09.091 "message": "Permission denied" 00:22:09.091 } 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:09.091 06:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGM4NjFlODRhZTFiNzBhM2NhZTkzNzBjZDg0ZTdlOTE2MjBkOTVlMjU2OGE5ZjAxzrDSHw==: 00:22:10.467 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDMzMjE4MTc2MmQ1NmIyNGRlYmU0ODhkMjc0NTFhYzkwNzkxMzFkOGZmZWQwZGY3vJDroA==: 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.468 nvme0n1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTVlM2Q5YTdhMzIxZWM2N2E2MzZmMzk2MzhlMTYwZTIn5Auj: 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBmNzBmNTM5ODI2YWY0ZmJhNmRiY2JlMWRiN2QwMzhyu4J4: 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.468 request: 00:22:10.468 { 00:22:10.468 "name": "nvme0", 00:22:10.468 "dhchap_key": "key2", 00:22:10.468 "dhchap_ctrlr_key": "ckey1", 00:22:10.468 "method": "bdev_nvme_set_keys", 00:22:10.468 "req_id": 1 00:22:10.468 } 00:22:10.468 Got JSON-RPC error response 00:22:10.468 response: 00:22:10.468 { 00:22:10.468 "code": -13, 00:22:10.468 "message": "Permission denied" 00:22:10.468 } 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:10.468 06:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:11.403 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.404 rmmod nvme_tcp 00:22:11.404 rmmod nvme_fabrics 00:22:11.404 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78629 ']' 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78629 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78629 ']' 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78629 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78629 00:22:11.662 killing process with pid 78629 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78629' 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78629 00:22:11.662 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78629 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:11.921 06:17:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:11.921 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:11.922 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:11.922 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:11.922 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.922 06:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:12.181 06:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:12.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:13.007 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:22:13.007 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:13.008 06:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.7hs /tmp/spdk.key-null.vxA /tmp/spdk.key-sha256.CYJ /tmp/spdk.key-sha384.xWu /tmp/spdk.key-sha512.aMS /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:13.008 06:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:13.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:13.595 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:13.595 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:13.595 00:22:13.595 real 0m37.637s 00:22:13.595 user 0m34.160s 00:22:13.595 sys 0m4.236s 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.595 ************************************ 00:22:13.595 END TEST nvmf_auth_host 00:22:13.595 ************************************ 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.595 ************************************ 00:22:13.595 START TEST nvmf_digest 00:22:13.595 ************************************ 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:13.595 * Looking for test storage... 
00:22:13.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.595 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.861 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:13.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.862 --rc genhtml_branch_coverage=1 00:22:13.862 --rc genhtml_function_coverage=1 00:22:13.862 --rc genhtml_legend=1 00:22:13.862 --rc geninfo_all_blocks=1 00:22:13.862 --rc geninfo_unexecuted_blocks=1 00:22:13.862 00:22:13.862 ' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:13.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.862 --rc genhtml_branch_coverage=1 00:22:13.862 --rc genhtml_function_coverage=1 00:22:13.862 --rc genhtml_legend=1 00:22:13.862 --rc geninfo_all_blocks=1 00:22:13.862 --rc geninfo_unexecuted_blocks=1 00:22:13.862 00:22:13.862 ' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:13.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.862 --rc genhtml_branch_coverage=1 00:22:13.862 --rc genhtml_function_coverage=1 00:22:13.862 --rc genhtml_legend=1 00:22:13.862 --rc geninfo_all_blocks=1 00:22:13.862 --rc geninfo_unexecuted_blocks=1 00:22:13.862 00:22:13.862 ' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:13.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.862 --rc genhtml_branch_coverage=1 00:22:13.862 --rc genhtml_function_coverage=1 00:22:13.862 --rc genhtml_legend=1 00:22:13.862 --rc geninfo_all_blocks=1 00:22:13.862 --rc geninfo_unexecuted_blocks=1 00:22:13.862 00:22:13.862 ' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.862 06:17:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.862 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:13.862 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:13.863 Cannot find device "nvmf_init_br" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:13.863 Cannot find device "nvmf_init_br2" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:13.863 Cannot find device "nvmf_tgt_br" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:22:13.863 Cannot find device "nvmf_tgt_br2" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:13.863 Cannot find device "nvmf_init_br" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:13.863 Cannot find device "nvmf_init_br2" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:13.863 Cannot find device "nvmf_tgt_br" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:13.863 Cannot find device "nvmf_tgt_br2" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:13.863 Cannot find device "nvmf_br" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:13.863 Cannot find device "nvmf_init_if" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:13.863 Cannot find device "nvmf_init_if2" 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:13.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.863 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.122 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.122 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.122 06:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.122 06:17:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:14.122 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:14.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:14.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:22:14.123 00:22:14.123 --- 10.0.0.3 ping statistics --- 00:22:14.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.123 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:14.123 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:14.123 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:22:14.123 00:22:14.123 --- 10.0.0.4 ping statistics --- 00:22:14.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.123 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:22:14.123 00:22:14.123 --- 10.0.0.1 ping statistics --- 00:22:14.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.123 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:14.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:22:14.123 00:22:14.123 --- 10.0.0.2 ping statistics --- 00:22:14.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.123 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.123 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:14.381 ************************************ 00:22:14.381 START TEST nvmf_digest_clean 00:22:14.381 ************************************ 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
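For orientation, a condensed sketch of the veth topology that nvmf_veth_init builds in the trace above. Interface names, addresses and the iptables rule are copied from the trace; only the first initiator/target pair is shown (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way), and error handling is omitted.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side peers
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3       # host initiator can now reach the target address over the bridge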
00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80281 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80281 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80281 ']' 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.381 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:14.381 [2024-11-27 06:17:19.297225] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:22:14.381 [2024-11-27 06:17:19.297327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.381 [2024-11-27 06:17:19.454516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.640 [2024-11-27 06:17:19.519520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.640 [2024-11-27 06:17:19.519601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.640 [2024-11-27 06:17:19.519616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.640 [2024-11-27 06:17:19.519627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.640 [2024-11-27 06:17:19.519637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
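As the trace above shows, the target application is launched inside that namespace with --wait-for-rpc and the harness waits for /var/tmp/spdk.sock. The common_target_config step that follows drives it over rpc.py; the trace does not echo that RPC batch, so the calls below are only an illustrative guess (consistent with the null0 bdev and the 10.0.0.3:4420 listener reported a few lines further down), not the exact commands digest.sh sends:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
# once /var/tmp/spdk.sock answers (assumed sequence):
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 100 4096               # null bdev; sizes are placeholders, not from the trace
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420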
00:22:14.640 [2024-11-27 06:17:19.520153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.640 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:14.640 [2024-11-27 06:17:19.701973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:14.899 null0 00:22:14.899 [2024-11-27 06:17:19.771791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.899 [2024-11-27 06:17:19.795970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80310 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80310 /var/tmp/bperf.sock 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80310 ']' 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.899 06:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:14.899 [2024-11-27 06:17:19.862875] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:22:14.899 [2024-11-27 06:17:19.862983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80310 ] 00:22:15.157 [2024-11-27 06:17:20.018011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.157 [2024-11-27 06:17:20.091361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.157 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.157 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:15.157 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:15.157 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:15.157 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:15.416 [2024-11-27 06:17:20.499720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:15.675 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.675 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.934 nvme0n1 00:22:15.934 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:15.934 06:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:16.193 Running I/O for 2 seconds... 
00:22:18.067 15875.00 IOPS, 62.01 MiB/s [2024-11-27T06:17:23.164Z] 16192.50 IOPS, 63.25 MiB/s 00:22:18.067 Latency(us) 00:22:18.067 [2024-11-27T06:17:23.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.067 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:18.067 nvme0n1 : 2.01 16191.39 63.25 0.00 0.00 7899.63 7000.44 20137.43 00:22:18.067 [2024-11-27T06:17:23.164Z] =================================================================================================================== 00:22:18.067 [2024-11-27T06:17:23.164Z] Total : 16191.39 63.25 0.00 0.00 7899.63 7000.44 20137.43 00:22:18.067 { 00:22:18.067 "results": [ 00:22:18.067 { 00:22:18.067 "job": "nvme0n1", 00:22:18.067 "core_mask": "0x2", 00:22:18.067 "workload": "randread", 00:22:18.067 "status": "finished", 00:22:18.067 "queue_depth": 128, 00:22:18.067 "io_size": 4096, 00:22:18.067 "runtime": 2.008042, 00:22:18.067 "iops": 16191.394403105114, 00:22:18.067 "mibps": 63.24763438712935, 00:22:18.067 "io_failed": 0, 00:22:18.067 "io_timeout": 0, 00:22:18.067 "avg_latency_us": 7899.62886386704, 00:22:18.067 "min_latency_us": 7000.436363636363, 00:22:18.067 "max_latency_us": 20137.425454545453 00:22:18.067 } 00:22:18.067 ], 00:22:18.067 "core_count": 1 00:22:18.067 } 00:22:18.067 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:18.067 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:18.067 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:18.067 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:18.067 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:18.067 | select(.opcode=="crc32c") 00:22:18.067 | "\(.module_name) \(.executed)"' 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80310 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80310 ']' 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80310 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80310 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
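Each run_bperf pass in this test repeats the same client-side sequence; the commands below are copied from the trace for this first randread pass (4 KiB I/O, queue depth 128), with the long repository paths shortened. The --ddgst flag on the attach is what enables a CRC-32C data digest on every TCP PDU, and the accel_get_stats query at the end is how the test confirms the digests were actually computed and by which accel module:

./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected here: "software <non-zero count>", since no DSA accel module is configured in this run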
00:22:18.636 killing process with pid 80310 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80310' 00:22:18.636 Received shutdown signal, test time was about 2.000000 seconds 00:22:18.636 00:22:18.636 Latency(us) 00:22:18.636 [2024-11-27T06:17:23.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.636 [2024-11-27T06:17:23.733Z] =================================================================================================================== 00:22:18.636 [2024-11-27T06:17:23.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80310 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80310 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80358 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80358 /var/tmp/bperf.sock 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80358 ']' 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.636 06:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:18.636 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:18.636 Zero copy mechanism will not be used. 00:22:18.636 [2024-11-27 06:17:23.711552] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:22:18.636 [2024-11-27 06:17:23.711658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80358 ] 00:22:18.895 [2024-11-27 06:17:23.859883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.895 [2024-11-27 06:17:23.921862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.831 06:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.831 06:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:19.831 06:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:19.831 06:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:19.831 06:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:20.090 [2024-11-27 06:17:25.049887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:20.090 06:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.090 06:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.348 nvme0n1 00:22:20.348 06:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:20.348 06:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:20.608 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:20.608 Zero copy mechanism will not be used. 00:22:20.608 Running I/O for 2 seconds... 
00:22:22.484 6720.00 IOPS, 840.00 MiB/s [2024-11-27T06:17:27.581Z] 6968.00 IOPS, 871.00 MiB/s 00:22:22.484 Latency(us) 00:22:22.484 [2024-11-27T06:17:27.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.484 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:22.484 nvme0n1 : 2.00 6966.07 870.76 0.00 0.00 2293.50 1951.19 10009.13 00:22:22.484 [2024-11-27T06:17:27.581Z] =================================================================================================================== 00:22:22.484 [2024-11-27T06:17:27.581Z] Total : 6966.07 870.76 0.00 0.00 2293.50 1951.19 10009.13 00:22:22.484 { 00:22:22.484 "results": [ 00:22:22.484 { 00:22:22.484 "job": "nvme0n1", 00:22:22.484 "core_mask": "0x2", 00:22:22.484 "workload": "randread", 00:22:22.484 "status": "finished", 00:22:22.484 "queue_depth": 16, 00:22:22.484 "io_size": 131072, 00:22:22.484 "runtime": 2.00285, 00:22:22.484 "iops": 6966.073345482687, 00:22:22.484 "mibps": 870.7591681853359, 00:22:22.484 "io_failed": 0, 00:22:22.484 "io_timeout": 0, 00:22:22.484 "avg_latency_us": 2293.504854045037, 00:22:22.484 "min_latency_us": 1951.1854545454546, 00:22:22.484 "max_latency_us": 10009.134545454546 00:22:22.484 } 00:22:22.484 ], 00:22:22.484 "core_count": 1 00:22:22.484 } 00:22:22.484 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:22.484 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:22.484 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:22.484 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:22.484 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:22.484 | select(.opcode=="crc32c") 00:22:22.484 | "\(.module_name) \(.executed)"' 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80358 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80358 ']' 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80358 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.753 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80358 00:22:23.017 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.017 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
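The MiB/s column in each result block follows directly from the IOPS and the I/O size, which makes a quick sanity check on the JSON above; for this 128 KiB randread pass:

    MiB/s = iops * io_size / 2^20 = 6966.07 * 131072 / 1048576 ≈ 870.76

which matches the reported "mibps" value.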
00:22:23.017 killing process with pid 80358 00:22:23.017 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80358' 00:22:23.017 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80358 00:22:23.017 Received shutdown signal, test time was about 2.000000 seconds 00:22:23.017 00:22:23.017 Latency(us) 00:22:23.017 [2024-11-27T06:17:28.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.017 [2024-11-27T06:17:28.114Z] =================================================================================================================== 00:22:23.017 [2024-11-27T06:17:28.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.017 06:17:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80358 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80423 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80423 /var/tmp/bperf.sock 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80423 ']' 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.017 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:23.276 [2024-11-27 06:17:28.125299] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:22:23.276 [2024-11-27 06:17:28.125433] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80423 ] 00:22:23.276 [2024-11-27 06:17:28.264424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.276 [2024-11-27 06:17:28.336792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.535 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.535 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:23.535 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:23.535 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:23.535 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:23.795 [2024-11-27 06:17:28.690545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:23.795 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:23.795 06:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:24.053 nvme0n1 00:22:24.053 06:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:24.054 06:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:24.312 Running I/O for 2 seconds... 
00:22:26.187 18924.00 IOPS, 73.92 MiB/s [2024-11-27T06:17:31.284Z] 19114.00 IOPS, 74.66 MiB/s 00:22:26.187 Latency(us) 00:22:26.187 [2024-11-27T06:17:31.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.187 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:26.187 nvme0n1 : 2.01 19129.59 74.72 0.00 0.00 6684.82 5302.46 14477.50 00:22:26.187 [2024-11-27T06:17:31.284Z] =================================================================================================================== 00:22:26.187 [2024-11-27T06:17:31.284Z] Total : 19129.59 74.72 0.00 0.00 6684.82 5302.46 14477.50 00:22:26.187 { 00:22:26.187 "results": [ 00:22:26.187 { 00:22:26.187 "job": "nvme0n1", 00:22:26.187 "core_mask": "0x2", 00:22:26.187 "workload": "randwrite", 00:22:26.187 "status": "finished", 00:22:26.187 "queue_depth": 128, 00:22:26.187 "io_size": 4096, 00:22:26.187 "runtime": 2.005061, 00:22:26.187 "iops": 19129.592566011706, 00:22:26.187 "mibps": 74.72497096098323, 00:22:26.187 "io_failed": 0, 00:22:26.187 "io_timeout": 0, 00:22:26.187 "avg_latency_us": 6684.818475715545, 00:22:26.187 "min_latency_us": 5302.458181818181, 00:22:26.187 "max_latency_us": 14477.498181818182 00:22:26.187 } 00:22:26.187 ], 00:22:26.187 "core_count": 1 00:22:26.187 } 00:22:26.187 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:26.187 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:26.187 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:26.187 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:26.187 | select(.opcode=="crc32c") 00:22:26.187 | "\(.module_name) \(.executed)"' 00:22:26.187 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80423 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80423 ']' 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80423 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.446 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80423 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:22:26.706 killing process with pid 80423 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80423' 00:22:26.706 Received shutdown signal, test time was about 2.000000 seconds 00:22:26.706 00:22:26.706 Latency(us) 00:22:26.706 [2024-11-27T06:17:31.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.706 [2024-11-27T06:17:31.803Z] =================================================================================================================== 00:22:26.706 [2024-11-27T06:17:31.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80423 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80423 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80471 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80471 /var/tmp/bperf.sock 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80471 ']' 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:26.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.706 06:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:26.965 [2024-11-27 06:17:31.819033] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:22:26.965 [2024-11-27 06:17:31.819156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80471 ] 00:22:26.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:26.965 Zero copy mechanism will not be used. 00:22:26.965 [2024-11-27 06:17:31.963663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.965 [2024-11-27 06:17:32.027517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.223 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.223 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:27.223 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:27.223 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:27.223 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:27.482 [2024-11-27 06:17:32.388480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:27.482 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:27.482 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:27.742 nvme0n1 00:22:27.742 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:27.742 06:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:28.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:28.001 Zero copy mechanism will not be used. 00:22:28.001 Running I/O for 2 seconds... 
00:22:29.875 5896.00 IOPS, 737.00 MiB/s [2024-11-27T06:17:34.972Z] 5782.00 IOPS, 722.75 MiB/s 00:22:29.875 Latency(us) 00:22:29.875 [2024-11-27T06:17:34.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.875 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:29.875 nvme0n1 : 2.00 5779.52 722.44 0.00 0.00 2762.75 2129.92 9353.77 00:22:29.875 [2024-11-27T06:17:34.972Z] =================================================================================================================== 00:22:29.875 [2024-11-27T06:17:34.972Z] Total : 5779.52 722.44 0.00 0.00 2762.75 2129.92 9353.77 00:22:29.875 { 00:22:29.875 "results": [ 00:22:29.875 { 00:22:29.875 "job": "nvme0n1", 00:22:29.875 "core_mask": "0x2", 00:22:29.875 "workload": "randwrite", 00:22:29.875 "status": "finished", 00:22:29.875 "queue_depth": 16, 00:22:29.875 "io_size": 131072, 00:22:29.875 "runtime": 2.003628, 00:22:29.875 "iops": 5779.515958052093, 00:22:29.875 "mibps": 722.4394947565116, 00:22:29.875 "io_failed": 0, 00:22:29.875 "io_timeout": 0, 00:22:29.875 "avg_latency_us": 2762.7491059821004, 00:22:29.875 "min_latency_us": 2129.92, 00:22:29.875 "max_latency_us": 9353.774545454546 00:22:29.875 } 00:22:29.875 ], 00:22:29.875 "core_count": 1 00:22:29.875 } 00:22:29.875 06:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:29.875 06:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:29.875 06:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:29.875 06:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:29.875 06:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:29.875 | select(.opcode=="crc32c") 00:22:29.875 | "\(.module_name) \(.executed)"' 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80471 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80471 ']' 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80471 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80471 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:30.134 
killing process with pid 80471 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80471' 00:22:30.134 Received shutdown signal, test time was about 2.000000 seconds 00:22:30.134 00:22:30.134 Latency(us) 00:22:30.134 [2024-11-27T06:17:35.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.134 [2024-11-27T06:17:35.231Z] =================================================================================================================== 00:22:30.134 [2024-11-27T06:17:35.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80471 00:22:30.134 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80471 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80281 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80281 ']' 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80281 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80281 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:30.393 killing process with pid 80281 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80281' 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80281 00:22:30.393 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80281 00:22:30.961 00:22:30.961 real 0m16.540s 00:22:30.961 user 0m31.212s 00:22:30.961 sys 0m5.585s 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:30.961 ************************************ 00:22:30.961 END TEST nvmf_digest_clean 00:22:30.961 ************************************ 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:30.961 ************************************ 00:22:30.961 START TEST nvmf_digest_error 00:22:30.961 ************************************ 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:22:30.961 06:17:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80548 00:22:30.961 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80548 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80548 ']' 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.962 06:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:30.962 [2024-11-27 06:17:35.886900] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:22:30.962 [2024-11-27 06:17:35.887006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.962 [2024-11-27 06:17:36.034400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.221 [2024-11-27 06:17:36.120383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.221 [2024-11-27 06:17:36.120456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.221 [2024-11-27 06:17:36.120471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.221 [2024-11-27 06:17:36.120482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.221 [2024-11-27 06:17:36.120491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.221 [2024-11-27 06:17:36.121056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:32.156 [2024-11-27 06:17:36.957955] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.156 06:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:32.156 [2024-11-27 06:17:37.033493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.156 null0 00:22:32.156 [2024-11-27 06:17:37.097010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.156 [2024-11-27 06:17:37.122032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:32.156 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80586 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80586 /var/tmp/bperf.sock 00:22:32.157 06:17:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80586 ']' 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.157 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:32.157 [2024-11-27 06:17:37.178936] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:22:32.157 [2024-11-27 06:17:37.179032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80586 ] 00:22:32.416 [2024-11-27 06:17:37.327098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.416 [2024-11-27 06:17:37.391780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.416 [2024-11-27 06:17:37.452878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.727 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:32.986 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.986 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.986 06:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.245 nvme0n1 00:22:33.245 06:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:33.245 06:17:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.245 06:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:33.245 06:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.245 06:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:33.245 06:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.245 Running I/O for 2 seconds... 00:22:33.245 [2024-11-27 06:17:38.323891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.245 [2024-11-27 06:17:38.323935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.245 [2024-11-27 06:17:38.323948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.342205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.342264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.342277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.360095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.360178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.360189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.377660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.377692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.377703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.395694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.395727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.395754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.413065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.413097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23367 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.413110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.430110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.430150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.430186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.447824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.504 [2024-11-27 06:17:38.447855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.504 [2024-11-27 06:17:38.447866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.504 [2024-11-27 06:17:38.465821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.465851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.465862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.484059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.484092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.484105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.501825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.501864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.501876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.519525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.519573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.519586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.536734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.536763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.536774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.554468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.554500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.554526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.571590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.571618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.571629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.505 [2024-11-27 06:17:38.588793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.505 [2024-11-27 06:17:38.588828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.505 [2024-11-27 06:17:38.588841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.606823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.606871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.606899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.624693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.624724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.624735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.641855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.641901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.641913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.659778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.659808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.659819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.677244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.677281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.677293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.694501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.694535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.694548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.712047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.712101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.712113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.729635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.729679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.729708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.747038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.747069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.747080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.764183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.764234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.764248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.781596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.781624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.781635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.798645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.798676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.798688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.816174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.816204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.816216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.833855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.833887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.833914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.765 [2024-11-27 06:17:38.851411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:33.765 [2024-11-27 06:17:38.851444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.765 [2024-11-27 06:17:38.851458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-11-27 06:17:38.869654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.025 [2024-11-27 06:17:38.869687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-11-27 06:17:38.869700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-11-27 06:17:38.886808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.025 [2024-11-27 06:17:38.886883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-11-27 06:17:38.886896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-11-27 06:17:38.904528] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.025 [2024-11-27 06:17:38.904564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-11-27 06:17:38.904576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-11-27 06:17:38.921817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.025 [2024-11-27 06:17:38.921855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-11-27 06:17:38.921868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-11-27 06:17:38.939367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:38.939402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:38.939415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:38.956431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:38.956468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:38.956481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:38.974289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:38.974327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:38.974341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:38.991539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:38.991576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:38.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:39.009068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.009101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.009113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:34.026 [2024-11-27 06:17:39.026673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.026931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.026946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:39.044431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.044637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.044654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:39.062719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.062902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.062923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:39.080406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.080469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.080482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:39.098461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.098713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.098735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-11-27 06:17:39.116967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.026 [2024-11-27 06:17:39.117016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-11-27 06:17:39.117030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.135277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.135458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.135477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.153258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.153292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.170697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.170930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.170947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.188786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.188820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.188832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.206595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.206645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.206659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.224568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.224601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.224613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.242482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.242656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.242674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.260173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.260216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.260227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.277536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.277581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 14169.00 IOPS, 55.35 MiB/s [2024-11-27T06:17:39.386Z] [2024-11-27 06:17:39.296153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.296194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.296208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.313941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.313976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.313993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.330688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.330757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.330771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.348414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.348446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.348459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-11-27 06:17:39.365432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.289 [2024-11-27 06:17:39.365479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-11-27 06:17:39.365491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.383174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.383364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:11435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.383381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.400991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.401028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.401042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.418579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.418804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.418836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.443951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.443985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.444002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.461379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.461428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.461466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.479206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.479430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.479446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.497495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.497535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.497549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.515789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.516012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.516033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.534227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.534266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.534280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.551470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.551505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.551523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.569014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.569048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.569066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.586209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.586259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.603845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.604079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.604095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.621718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.621763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.621784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.549 [2024-11-27 06:17:39.638954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1664fb0) 00:22:34.549 [2024-11-27 06:17:39.638989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.549 [2024-11-27 06:17:39.639001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.808 [2024-11-27 06:17:39.656667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.808 [2024-11-27 06:17:39.656701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.808 [2024-11-27 06:17:39.656713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.808 [2024-11-27 06:17:39.674011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.808 [2024-11-27 06:17:39.674044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.808 [2024-11-27 06:17:39.674056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.808 [2024-11-27 06:17:39.691291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.691333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.691345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.708158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.708207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.708222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.725486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.725530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.725542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.743053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.743091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.743104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.760595] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.760661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.760679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.778362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.778398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.778412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.795649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.795873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.795890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.813147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.813358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.813376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.830551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.830763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.830885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.848127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.848360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.848498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.865968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.866166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.866328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:34.809 [2024-11-27 06:17:39.883622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.883829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.883983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.809 [2024-11-27 06:17:39.901740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:34.809 [2024-11-27 06:17:39.901966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.809 [2024-11-27 06:17:39.902139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:39.920163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:39.920352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:39.920477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:39.938426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:39.938599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:39.938776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:39.956294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:39.956492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:39.956619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:39.974147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:39.974329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:39.974453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:39.991915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:39.992124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:39.992265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.010317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.010489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.010640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.028355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.028390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.028414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.046034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.046226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.046244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.063876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.063919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.063933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.081451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.081488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.081502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.099274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.099341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.099355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.116482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.116533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.116557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.134036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.134074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.134096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.068 [2024-11-27 06:17:40.151900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.068 [2024-11-27 06:17:40.151935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.068 [2024-11-27 06:17:40.151958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.169525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.169562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.169576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.186844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.187084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.187102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.204084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.204121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.204183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.221188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.221385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.221407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.238492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.238543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.327 [2024-11-27 06:17:40.238558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.256006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.256043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.256061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.273150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.273195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.273209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 [2024-11-27 06:17:40.290444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1664fb0) 00:22:35.327 [2024-11-27 06:17:40.290624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.327 [2024-11-27 06:17:40.290642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.327 14295.00 IOPS, 55.84 MiB/s 00:22:35.327 Latency(us) 00:22:35.327 [2024-11-27T06:17:40.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.328 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:35.328 nvme0n1 : 2.01 14289.05 55.82 0.00 0.00 8951.32 8102.63 34078.72 00:22:35.328 [2024-11-27T06:17:40.425Z] =================================================================================================================== 00:22:35.328 [2024-11-27T06:17:40.425Z] Total : 14289.05 55.82 0.00 0.00 8951.32 8102.63 34078.72 00:22:35.328 { 00:22:35.328 "results": [ 00:22:35.328 { 00:22:35.328 "job": "nvme0n1", 00:22:35.328 "core_mask": "0x2", 00:22:35.328 "workload": "randread", 00:22:35.328 "status": "finished", 00:22:35.328 "queue_depth": 128, 00:22:35.328 "io_size": 4096, 00:22:35.328 "runtime": 2.009791, 00:22:35.328 "iops": 14289.047965683994, 00:22:35.328 "mibps": 55.8165936159531, 00:22:35.328 "io_failed": 0, 00:22:35.328 "io_timeout": 0, 00:22:35.328 "avg_latency_us": 8951.319964798764, 00:22:35.328 "min_latency_us": 8102.632727272728, 00:22:35.328 "max_latency_us": 34078.72 00:22:35.328 } 00:22:35.328 ], 00:22:35.328 "core_count": 1 00:22:35.328 } 00:22:35.328 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:35.328 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:35.328 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:35.328 | .driver_specific 00:22:35.328 | .nvme_error 00:22:35.328 | .status_code 00:22:35.328 | .command_transient_transport_error' 00:22:35.328 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80586 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80586 ']' 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80586 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.587 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80586 00:22:35.846 killing process with pid 80586 00:22:35.846 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.846 00:22:35.846 Latency(us) 00:22:35.846 [2024-11-27T06:17:40.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.846 [2024-11-27T06:17:40.943Z] =================================================================================================================== 00:22:35.846 [2024-11-27T06:17:40.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80586' 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80586 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80586 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80633 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80633 /var/tmp/bperf.sock 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80633 ']' 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:35.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.846 06:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:36.105 [2024-11-27 06:17:40.966622] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:22:36.105 [2024-11-27 06:17:40.966962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80633 ] 00:22:36.105 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:36.105 Zero copy mechanism will not be used. 00:22:36.105 [2024-11-27 06:17:41.115410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.105 [2024-11-27 06:17:41.178338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.364 [2024-11-27 06:17:41.238086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:36.364 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.364 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:36.364 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:36.364 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:36.622 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:36.622 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.622 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:36.622 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.622 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.622 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.881 nvme0n1 00:22:36.881 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:36.881 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.881 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:37.140 06:17:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.140 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:37.140 06:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:37.140 Zero copy mechanism will not be used. 00:22:37.140 Running I/O for 2 seconds... 00:22:37.140 [2024-11-27 06:17:42.125297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.140 [2024-11-27 06:17:42.125394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.140 [2024-11-27 06:17:42.125413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.140 [2024-11-27 06:17:42.130058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.140 [2024-11-27 06:17:42.130099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.140 [2024-11-27 06:17:42.130114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.140 [2024-11-27 06:17:42.134682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.140 [2024-11-27 06:17:42.134718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.134731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.139291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.139327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.139340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.143768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.143878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.143900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.148432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.148466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.148478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.152717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.152752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.152764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.157154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.157205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.157221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.161599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.161819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.161836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.166424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.166466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.166485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.170965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.171003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.171016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.175464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.175547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.175574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.179977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.180014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.141 [2024-11-27 06:17:42.180042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.184702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.184945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.184963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.189769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.189822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.194446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.194485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.194514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.198809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.198848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.198862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.203381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.203416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.203428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.207910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.207948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.207962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.212589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.212628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.212642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.217150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.217202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.217217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.221753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.221791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.221804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.226116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.226170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.226186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.230699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.230738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.230752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.141 [2024-11-27 06:17:42.235308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.141 [2024-11-27 06:17:42.235348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.141 [2024-11-27 06:17:42.235362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.239946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.239993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.240008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.244627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.244666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.244680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.249023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.249062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.249076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.253255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.253292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.253311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.257662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.257733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.257744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.262098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.262182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.262217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.402 [2024-11-27 06:17:42.266695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.402 [2024-11-27 06:17:42.266734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.402 [2024-11-27 06:17:42.266752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.271540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.271581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.271595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.276334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 
00:22:37.403 [2024-11-27 06:17:42.276369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.276381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.280656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.280692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.280704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.285275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.285376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.285392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.290018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.290053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.290075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.294586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.294622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.294634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.298936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.298983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.298996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.303631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.303862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.303879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.308202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.308235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.308252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.312613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.312648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.312665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.317071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.317109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.317122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.321657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.321691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.321703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.326001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.326035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.326056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.330647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.330814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.330833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.335495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.335532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.335545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.340083] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.340117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.340144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.344692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.344734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.344749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.349269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.349364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.349375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.353556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.353594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.353608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.358062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.358097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.358109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.362488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.362526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.362539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.366786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.366830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.366843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:22:37.403 [2024-11-27 06:17:42.371289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.371321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.371349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.375868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.375903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.375914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.380194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.380373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.380391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.384978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.385017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.385031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.389467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.403 [2024-11-27 06:17:42.389500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.403 [2024-11-27 06:17:42.389513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.403 [2024-11-27 06:17:42.393580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.393617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.393631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.397999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.398037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.398051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.402266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.402303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.402316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.406455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.406492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.406523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.410864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.410899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.410912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.415483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.415649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.415666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.420137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.420196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.420213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.424679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.424718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.424731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.429152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.429198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.429212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.433384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.433420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.433433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.437848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.437882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.437903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.442354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.442391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.442404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.446905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.446941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.446957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.451586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.451624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.451638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.456100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.456180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.456195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.460580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.460613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 
[2024-11-27 06:17:42.460626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.464936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.464973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.464987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.469466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.469500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.469545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.473791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.473981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.473998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.478456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.478495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.478510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.482892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.482942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.482959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.487323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.487356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.487368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.491639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.491676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.491690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.404 [2024-11-27 06:17:42.496142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.404 [2024-11-27 06:17:42.496215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.404 [2024-11-27 06:17:42.496243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.500821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.500898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.500911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.505260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.505306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.505326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.509841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.509881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.509895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.514486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.514570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.514583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.518795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.518860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.518877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.523341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.523374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.523386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.527927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.527990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.528020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.532216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.532265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.532278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.536692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.536729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.536743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.541171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.541221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.541235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.545317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.545353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.545366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.549770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.549809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.549822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.554294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.554332] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.554346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.558651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.558686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.558703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.563083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.563141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.563156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.567713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.567888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.567904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.572555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.572592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.572605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.577187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.577227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.577241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.665 [2024-11-27 06:17:42.582087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.665 [2024-11-27 06:17:42.582167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.665 [2024-11-27 06:17:42.582184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.586673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 
00:22:37.666 [2024-11-27 06:17:42.586853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.586871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.591375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.591412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.591425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.595860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.595911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.595930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.600390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.600435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.600447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.604797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.604834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.604848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.609586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.609621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.609634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.614383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.614419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.614432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.618602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.618654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.618668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.622916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.622949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.622960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.627357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.627411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.627425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.631658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.631695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.631708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.636065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.636098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.636111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.640584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.640620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.640633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.644683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.644725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.644789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.649029] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.649076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.649089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.653547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.653581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.653594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.657623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.657667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.657679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.661952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.662002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.666287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.666322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.666336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.670648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.670689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.666 [2024-11-27 06:17:42.670711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.666 [2024-11-27 06:17:42.674910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.666 [2024-11-27 06:17:42.674954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.674966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:37.667 [2024-11-27 06:17:42.679517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.679553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.679567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.683825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.683868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.683880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.688030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.688073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.688084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.692402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.692444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.692456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.696858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.696905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.696917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.701360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.701399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.701413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.705925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.705972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.705984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.710532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.710601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.710613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.714896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.714941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.714954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.719191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.719226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.723535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.723577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.723597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.727701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.727737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.727751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.731969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.732005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.732018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.736715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.736752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.736765] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.741162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.741208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.741222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.745484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.745519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.745532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.750064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.750101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.750115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.754234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.754270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.754283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.667 [2024-11-27 06:17:42.758738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.667 [2024-11-27 06:17:42.758785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.667 [2024-11-27 06:17:42.758807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.763487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.763533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.763551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.767988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.768028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 
06:17:42.768041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.772514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.772559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.772578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.777052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.777114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.781234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.781269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.781282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.785756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.785797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.785818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.790227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.790265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.790278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.794403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.794441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.794454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.798776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.798813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.798827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.803088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.803124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.803149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.807430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.807466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.807479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.811691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.811788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.811817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.816329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.816387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.816400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.820515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.820559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.820597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.824893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.824934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.824945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.829320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.829350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.829361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.833799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.833836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.833853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.837650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.837681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.837693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.841674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.841716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.841739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.845529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.845565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.845576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.849429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.849463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.849475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.853385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.853417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.853429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.857293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.857333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.857344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.928 [2024-11-27 06:17:42.861375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.928 [2024-11-27 06:17:42.861408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.928 [2024-11-27 06:17:42.861420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.865403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.865438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.865450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.869284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.869317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.869328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.873091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.873137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.873154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.876909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.876945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.876957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.880779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.880812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.880824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.884556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 
[2024-11-27 06:17:42.884590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.884601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.888404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.888438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.888450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.892256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.892296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.892307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.896009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.896042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.896053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.899816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.899849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.899860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.903597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.903632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.903643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.907397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.907430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.907442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.911148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.911202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.911225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.914900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.914933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.914946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.918587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.918627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.918639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.922364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.922398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.922410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.926004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.926037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.926048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.929709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.929753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.929771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.933465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.933499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.933511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.937147] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.937189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.937202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.940917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.940962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.940974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.945009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.945042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.945052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.949229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.949264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.949277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.953306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.953352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.953364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.957394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.957437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.957458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.929 [2024-11-27 06:17:42.961462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.961533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.961545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:22:37.929 [2024-11-27 06:17:42.965478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.929 [2024-11-27 06:17:42.965513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.929 [2024-11-27 06:17:42.965525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.969481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.969513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.969525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.973233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.973264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.973275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.977033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.977067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.977078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.980852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.980885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.980897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.984688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.984720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.984731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.988485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.988519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.988531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.992364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.992398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.992416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.996076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.996108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.996119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:42.999833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:42.999866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:42.999877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:43.003650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:43.003682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:43.003693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:43.007503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:43.007536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:43.007547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:43.011251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:43.011300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:43.011311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:43.014996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:43.015038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:43.015049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.930 [2024-11-27 06:17:43.018922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:37.930 [2024-11-27 06:17:43.018959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.930 [2024-11-27 06:17:43.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.023133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.023193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.023216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.027018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.027052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.027064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.031092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.031137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.031151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.034885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.034927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.034938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.038811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.038844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.038857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.042745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.042793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.191 [2024-11-27 06:17:43.042806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.046438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.046508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.046520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.050273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.050308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.050320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.054086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.054118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.054148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.057837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.057872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.057883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.061664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.191 [2024-11-27 06:17:43.061705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.191 [2024-11-27 06:17:43.061717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.191 [2024-11-27 06:17:43.065427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.065460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.065471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.069191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.069224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.069235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.073086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.073138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.073156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.076873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.076906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.076918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.080631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.080663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.080674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.084422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.084454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.084465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.088227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.088259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.088270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.092026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.092069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.092080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.095758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.095790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.095802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.099538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.099571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.099582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.103288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.103322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.103333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.106976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.107010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.107022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.110799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.110834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.110845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.114561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.114598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.114611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.192 7176.00 IOPS, 897.00 MiB/s [2024-11-27T06:17:43.289Z] [2024-11-27 06:17:43.119900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.119933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.119944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.123709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.123743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.123754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.127489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.127522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.127533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.131330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.131363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.131375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.135120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.135178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.135191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.139167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.139212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.192 [2024-11-27 06:17:43.139225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.192 [2024-11-27 06:17:43.143578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.192 [2024-11-27 06:17:43.143614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.143627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.147566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.147600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.147612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.151579] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.151613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.151624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.155968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.156005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.156017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.160245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.160324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.160350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.164742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.164777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.164789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.169018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.169052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.169063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.173119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.173162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.173175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.177404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.177437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.177448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:38.193 [2024-11-27 06:17:43.181464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.181498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.181521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.185601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.185633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.185644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.189809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.189859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.189872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.194034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.194068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.194080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.198524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.198560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.198572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.203288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.203325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.203338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.207829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.207869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.207883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.212617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.212653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.212666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.217109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.217172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.217186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.221714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.221759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.221771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.226027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.226062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.226075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.230596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.230630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-27 06:17:43.230656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.193 [2024-11-27 06:17:43.235137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.193 [2024-11-27 06:17:43.235179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.235193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.239493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.239542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.239553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.244171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.244216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.244230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.248804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.248854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.248866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.253343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.253374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.253385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.257820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.257853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.257866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.262217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.262257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.262276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.266660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.266708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.266720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.270847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.270882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.194 [2024-11-27 06:17:43.270894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.274892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.274925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.274937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.279128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.279173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.279186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.194 [2024-11-27 06:17:43.283477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.194 [2024-11-27 06:17:43.283515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-27 06:17:43.283528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.287747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.287782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.287795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.291718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.291775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.291788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.295852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.295886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.295914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.299638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.299671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.299683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.303449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.303482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.303494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.307220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.307252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.307264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.311143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.311188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.311200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.314988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.315022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.315048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.318797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.318828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.318840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.322565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.322598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.322609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.455 [2024-11-27 06:17:43.326320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.455 [2024-11-27 06:17:43.326353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.455 [2024-11-27 06:17:43.326364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.892847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982
[2024-11-27 06:17:43.892881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.892892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.896959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.896991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.897002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.901067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.901099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.901111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.904966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.905007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.905018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.909438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.909471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.909484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.913559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.913600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.913612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.917409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.917452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.917463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.921305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.921346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.921357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.925058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.925091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.925102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.929393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.929425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.929436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.933264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.933297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.933309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.937167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.937207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.937218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.941014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.941083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.945572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.945620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.945632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.950011] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.950043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.950054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.953795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.953838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.953859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.957739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.957770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.957781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.961590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.961621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.961649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.965582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.965614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.965625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.969547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.969610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.969634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.973771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.973813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.973841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:22:38.982 [2024-11-27 06:17:43.977988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.978028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.978042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.982619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.982680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.982692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.987106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.982 [2024-11-27 06:17:43.987150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.982 [2024-11-27 06:17:43.987169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.982 [2024-11-27 06:17:43.991041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:43.991082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:43.991093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:43.994856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:43.994898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:43.994909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:43.998857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:43.998900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:43.998911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.002708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.002773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.002784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.006939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.006981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.006993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.010857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.010900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.010927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.014718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.014750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.014762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.018821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.018853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.018864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.022658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.022707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.022726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.026752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.026785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.026797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.030953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.030991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.031005] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.035267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.035298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.035309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.039247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.039278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.039289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.043045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.043077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.043089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.046876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.046919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.046931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.050749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.050792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.050810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.054776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.054808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.054820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.058718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.058768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 
06:17:44.058814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.062611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.062653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.066924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.066972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.066983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.983 [2024-11-27 06:17:44.070991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:38.983 [2024-11-27 06:17:44.071040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.983 [2024-11-27 06:17:44.071053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.241 [2024-11-27 06:17:44.075799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.241 [2024-11-27 06:17:44.075854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.241 [2024-11-27 06:17:44.075869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:39.241 [2024-11-27 06:17:44.080153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.241 [2024-11-27 06:17:44.080195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.241 [2024-11-27 06:17:44.080208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:39.241 [2024-11-27 06:17:44.084487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.241 [2024-11-27 06:17:44.084532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.241 [2024-11-27 06:17:44.084545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:39.241 [2024-11-27 06:17:44.088642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.241 [2024-11-27 06:17:44.088681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:39.241 [2024-11-27 06:17:44.088695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.092990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.093032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.093043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.096909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.096962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.096976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.101025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.101066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.101078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.105002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.105036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.105051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.108949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.108991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.109002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.112928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.112960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.112971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:39.242 [2024-11-27 06:17:44.118254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec59b0) 00:22:39.242 [2024-11-27 06:17:44.118288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.242 [2024-11-27 06:17:44.118300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:39.242 7455.50 IOPS, 931.94 MiB/s 00:22:39.242 Latency(us) 00:22:39.242 [2024-11-27T06:17:44.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.242 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:39.242 nvme0n1 : 2.00 7455.16 931.89 0.00 0.00 2142.63 1645.85 11439.01 00:22:39.242 [2024-11-27T06:17:44.339Z] =================================================================================================================== 00:22:39.242 [2024-11-27T06:17:44.339Z] Total : 7455.16 931.89 0.00 0.00 2142.63 1645.85 11439.01 00:22:39.242 { 00:22:39.242 "results": [ 00:22:39.242 { 00:22:39.242 "job": "nvme0n1", 00:22:39.242 "core_mask": "0x2", 00:22:39.242 "workload": "randread", 00:22:39.242 "status": "finished", 00:22:39.242 "queue_depth": 16, 00:22:39.242 "io_size": 131072, 00:22:39.242 "runtime": 2.002238, 00:22:39.242 "iops": 7455.1576785576935, 00:22:39.242 "mibps": 931.8947098197117, 00:22:39.242 "io_failed": 0, 00:22:39.242 "io_timeout": 0, 00:22:39.242 "avg_latency_us": 2142.6305542732202, 00:22:39.242 "min_latency_us": 1645.8472727272726, 00:22:39.242 "max_latency_us": 11439.01090909091 00:22:39.242 } 00:22:39.242 ], 00:22:39.242 "core_count": 1 00:22:39.242 } 00:22:39.242 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:39.242 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:39.242 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:39.242 | .driver_specific 00:22:39.242 | .nvme_error 00:22:39.242 | .status_code 00:22:39.242 | .command_transient_transport_error' 00:22:39.242 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 482 > 0 )) 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80633 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80633 ']' 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80633 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80633 00:22:39.501 killing process with pid 80633 00:22:39.501 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.501 00:22:39.501 Latency(us) 00:22:39.501 [2024-11-27T06:17:44.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.501 [2024-11-27T06:17:44.598Z] =================================================================================================================== 00:22:39.501 [2024-11-27T06:17:44.598Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80633' 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80633 00:22:39.501 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80633 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80687 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80687 /var/tmp/bperf.sock 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80687 ']' 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.760 06:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:39.760 [2024-11-27 06:17:44.748317] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:22:39.760 [2024-11-27 06:17:44.748415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80687 ] 00:22:40.019 [2024-11-27 06:17:44.897818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.019 [2024-11-27 06:17:44.958510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.019 [2024-11-27 06:17:45.015468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:40.019 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.019 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:40.019 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:40.019 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:40.278 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:40.278 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.278 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:40.278 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.278 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.278 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.847 nvme0n1 00:22:40.847 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:40.847 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.847 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:40.847 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.847 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:40.847 06:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.847 Running I/O for 2 seconds... 
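For readers skimming the trace above: the host/digest.sh lines interleaved with the bdevperf startup spell out how this randwrite digest-error pass is driven. The following condensed sketch simply restates that traced sequence in order; the binary paths, RPC socket, target address 10.0.0.3:4420, NQN and bdev names are the values from this particular run, and get_transient_errcount corresponds to the bdev_get_iostat/jq pipeline traced earlier in the log. rpc_cmd is the suite's generic RPC wrapper (the harness picks its socket), so that call is shown as logged rather than with an explicit -s argument.

    # start bdevperf with its own RPC socket and wait for it to listen
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &

    # collect NVMe error stats and retry forever, so injected digest failures
    # surface as transient transport errors instead of failed I/O
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # reset any previous crc32c error injection, attach the TCP controller with
    # data digest enabled, then corrupt every 256th crc32c computation
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # run the 2-second workload, then check that at least one transient
    # transport error was recorded for the bdev (the pass criterion)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))

The earlier randread pass followed the same pattern; there the error count extracted by this pipeline was 482, which is why the `(( 482 > 0 ))` check in the trace above succeeds before the bperf process is killed and the next workload is started.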
00:22:40.847 [2024-11-27 06:17:45.811717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efb048 00:22:40.847 [2024-11-27 06:17:45.813272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.813314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:40.847 [2024-11-27 06:17:45.827525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efb8b8 00:22:40.847 [2024-11-27 06:17:45.829024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.829063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.847 [2024-11-27 06:17:45.842772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efc128 00:22:40.847 [2024-11-27 06:17:45.844223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.844269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.847 [2024-11-27 06:17:45.858687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efc998 00:22:40.847 [2024-11-27 06:17:45.860128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.860200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:40.847 [2024-11-27 06:17:45.874359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efd208 00:22:40.847 [2024-11-27 06:17:45.875689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.875736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:40.847 [2024-11-27 06:17:45.889348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efda78 00:22:40.847 [2024-11-27 06:17:45.890769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.890818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.847 [2024-11-27 06:17:45.904879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efe2e8 00:22:40.847 [2024-11-27 06:17:45.906238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.906287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:22:40.847 [2024-11-27 06:17:45.920562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efeb58 00:22:40.847 [2024-11-27 06:17:45.921857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.847 [2024-11-27 06:17:45.921905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:45.942539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efef90 00:22:41.106 [2024-11-27 06:17:45.945374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:45.945408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:45.958912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efeb58 00:22:41.106 [2024-11-27 06:17:45.961479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:45.961511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:45.974050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efe2e8 00:22:41.106 [2024-11-27 06:17:45.976601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:45.976633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:45.989658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efda78 00:22:41.106 [2024-11-27 06:17:45.992189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:45.992254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.006150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efd208 00:22:41.106 [2024-11-27 06:17:46.008722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.008754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.021978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efc998 00:22:41.106 [2024-11-27 06:17:46.024641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.024673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.038012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efc128 00:22:41.106 [2024-11-27 06:17:46.040557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.040592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.054081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efb8b8 00:22:41.106 [2024-11-27 06:17:46.056800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.056831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.070333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efb048 00:22:41.106 [2024-11-27 06:17:46.072606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.072689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.085632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016efa7d8 00:22:41.106 [2024-11-27 06:17:46.088056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.088106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.101815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef9f68 00:22:41.106 [2024-11-27 06:17:46.104227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.104276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.118039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef96f8 00:22:41.106 [2024-11-27 06:17:46.120444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.120480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.134011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef8e88 00:22:41.106 [2024-11-27 06:17:46.136477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.136514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.149830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef8618 00:22:41.106 [2024-11-27 06:17:46.152267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.106 [2024-11-27 06:17:46.152299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.106 [2024-11-27 06:17:46.165331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef7da8 00:22:41.106 [2024-11-27 06:17:46.167699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.107 [2024-11-27 06:17:46.167742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:41.107 [2024-11-27 06:17:46.180904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef7538 00:22:41.107 [2024-11-27 06:17:46.183208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.107 [2024-11-27 06:17:46.183250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:41.107 [2024-11-27 06:17:46.196478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef6cc8 00:22:41.107 [2024-11-27 06:17:46.198944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.107 [2024-11-27 06:17:46.199004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.212865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef6458 00:22:41.366 [2024-11-27 06:17:46.215203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.215249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.228359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef5be8 00:22:41.366 [2024-11-27 06:17:46.230554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.230587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.244032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef5378 00:22:41.366 [2024-11-27 06:17:46.246252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.246283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.259307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef4b08 00:22:41.366 [2024-11-27 06:17:46.261461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.261505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.275557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef4298 00:22:41.366 [2024-11-27 06:17:46.277740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.277786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.292207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef3a28 00:22:41.366 [2024-11-27 06:17:46.294421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.294453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.308141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef31b8 00:22:41.366 [2024-11-27 06:17:46.310330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.310360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.323436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef2948 00:22:41.366 [2024-11-27 06:17:46.325619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.325651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.339532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef20d8 00:22:41.366 [2024-11-27 06:17:46.341608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.341653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:41.366 [2024-11-27 06:17:46.355179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef1868 00:22:41.366 [2024-11-27 06:17:46.357053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.366 [2024-11-27 06:17:46.357097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:41.367 [2024-11-27 06:17:46.370204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef0ff8 00:22:41.367 [2024-11-27 06:17:46.372330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.367 [2024-11-27 06:17:46.372374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:41.367 [2024-11-27 06:17:46.386096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef0788 00:22:41.367 [2024-11-27 06:17:46.388067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.367 [2024-11-27 06:17:46.388108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:41.367 [2024-11-27 06:17:46.401242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeff18 00:22:41.367 [2024-11-27 06:17:46.403260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.367 [2024-11-27 06:17:46.403339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:41.367 [2024-11-27 06:17:46.416568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eef6a8 00:22:41.367 [2024-11-27 06:17:46.418721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.367 [2024-11-27 06:17:46.418752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:41.367 [2024-11-27 06:17:46.432780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeee38 00:22:41.367 [2024-11-27 06:17:46.434875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.367 [2024-11-27 06:17:46.434919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:41.367 [2024-11-27 06:17:46.447902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eee5c8 00:22:41.367 [2024-11-27 06:17:46.449772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.367 [2024-11-27 06:17:46.449814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.463583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eedd58 00:22:41.626 [2024-11-27 06:17:46.465558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.465605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.479743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eed4e8 00:22:41.626 [2024-11-27 06:17:46.481631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.481689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.495592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eecc78 00:22:41.626 [2024-11-27 06:17:46.497451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.497496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.511533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eec408 00:22:41.626 [2024-11-27 06:17:46.513418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.513450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.528005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eebb98 00:22:41.626 [2024-11-27 06:17:46.529798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.529823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.543663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeb328 00:22:41.626 [2024-11-27 06:17:46.545445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.545506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.626 [2024-11-27 06:17:46.560062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeaab8 00:22:41.626 [2024-11-27 06:17:46.562001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.626 [2024-11-27 06:17:46.562034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.576438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eea248 00:22:41.627 [2024-11-27 06:17:46.578218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.578251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.592750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee99d8 00:22:41.627 [2024-11-27 06:17:46.594572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.594617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.609476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee9168 00:22:41.627 [2024-11-27 06:17:46.611357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.611391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.625762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee88f8 00:22:41.627 [2024-11-27 06:17:46.627468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.627496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.641932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee8088 00:22:41.627 [2024-11-27 06:17:46.643690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.643765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.657957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee7818 00:22:41.627 [2024-11-27 06:17:46.659670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.659716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.674375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee6fa8 00:22:41.627 [2024-11-27 06:17:46.676082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.676115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.691391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee6738 00:22:41.627 [2024-11-27 06:17:46.693122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.693179] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:41.627 [2024-11-27 06:17:46.708085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee5ec8 00:22:41.627 [2024-11-27 06:17:46.709674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.627 [2024-11-27 06:17:46.709707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.724387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee5658 00:22:41.888 [2024-11-27 06:17:46.726093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.726135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.740538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee4de8 00:22:41.888 [2024-11-27 06:17:46.742089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.742138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.756396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee4578 00:22:41.888 [2024-11-27 06:17:46.757985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.758018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.771851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee3d08 00:22:41.888 [2024-11-27 06:17:46.773444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.773472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.787847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee3498 00:22:41.888 [2024-11-27 06:17:46.789406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.789439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:41.888 15941.00 IOPS, 62.27 MiB/s [2024-11-27T06:17:46.985Z] [2024-11-27 06:17:46.805805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee2c28 00:22:41.888 [2024-11-27 06:17:46.807349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:41.888 [2024-11-27 06:17:46.807380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.821881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee23b8 00:22:41.888 [2024-11-27 06:17:46.823508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.823552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.837478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee1b48 00:22:41.888 [2024-11-27 06:17:46.839054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.839099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.852882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee12d8 00:22:41.888 [2024-11-27 06:17:46.854433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.888 [2024-11-27 06:17:46.854466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.888 [2024-11-27 06:17:46.868710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee0a68 00:22:41.888 [2024-11-27 06:17:46.870135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.870237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.889 [2024-11-27 06:17:46.884025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee01f8 00:22:41.889 [2024-11-27 06:17:46.885518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.885548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.889 [2024-11-27 06:17:46.900318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016edf988 00:22:41.889 [2024-11-27 06:17:46.901729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.901775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:41.889 [2024-11-27 06:17:46.915993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016edf118 00:22:41.889 [2024-11-27 06:17:46.917332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2175 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.917364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:41.889 [2024-11-27 06:17:46.931358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ede8a8 00:22:41.889 [2024-11-27 06:17:46.932708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.932753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:41.889 [2024-11-27 06:17:46.946212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ede038 00:22:41.889 [2024-11-27 06:17:46.947524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.947569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.889 [2024-11-27 06:17:46.968456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ede038 00:22:41.889 [2024-11-27 06:17:46.971027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.889 [2024-11-27 06:17:46.971073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.148 [2024-11-27 06:17:46.983949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ede8a8 00:22:42.148 [2024-11-27 06:17:46.986718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.148 [2024-11-27 06:17:46.986778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.148 [2024-11-27 06:17:46.999784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016edf118 00:22:42.148 [2024-11-27 06:17:47.002297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.002332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.014973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016edf988 00:22:42.149 [2024-11-27 06:17:47.017305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.017349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.030061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee01f8 00:22:42.149 [2024-11-27 06:17:47.032460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23830 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.032491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.044863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee0a68 00:22:42.149 [2024-11-27 06:17:47.047334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.047378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.060620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee12d8 00:22:42.149 [2024-11-27 06:17:47.063096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.063149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.075792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee1b48 00:22:42.149 [2024-11-27 06:17:47.078270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.078304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.091563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee23b8 00:22:42.149 [2024-11-27 06:17:47.093950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.093982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.107114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee2c28 00:22:42.149 [2024-11-27 06:17:47.109592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.109637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.122453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee3498 00:22:42.149 [2024-11-27 06:17:47.124655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.124700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.137367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee3d08 00:22:42.149 [2024-11-27 06:17:47.139669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 
nsid:1 lba:18781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.139715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.153384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee4578 00:22:42.149 [2024-11-27 06:17:47.155774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.155820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.169841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee4de8 00:22:42.149 [2024-11-27 06:17:47.172150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.172199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.186377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee5658 00:22:42.149 [2024-11-27 06:17:47.188659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.188707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.202821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee5ec8 00:22:42.149 [2024-11-27 06:17:47.205244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.205298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.218399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee6738 00:22:42.149 [2024-11-27 06:17:47.220699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.220744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:42.149 [2024-11-27 06:17:47.234221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee6fa8 00:22:42.149 [2024-11-27 06:17:47.236476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.149 [2024-11-27 06:17:47.236505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.249917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee7818 00:22:42.408 [2024-11-27 06:17:47.252177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:6567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.252244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.265783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee8088 00:22:42.408 [2024-11-27 06:17:47.268111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.268149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.281681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee88f8 00:22:42.408 [2024-11-27 06:17:47.283849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.283893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.297935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee9168 00:22:42.408 [2024-11-27 06:17:47.300209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.300248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.314662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ee99d8 00:22:42.408 [2024-11-27 06:17:47.316836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.316880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.330576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eea248 00:22:42.408 [2024-11-27 06:17:47.332695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.332726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.346338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeaab8 00:22:42.408 [2024-11-27 06:17:47.348382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.348427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.361720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeb328 00:22:42.408 [2024-11-27 06:17:47.363872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.363916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.377719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eebb98 00:22:42.408 [2024-11-27 06:17:47.379670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.379713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.392876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eec408 00:22:42.408 [2024-11-27 06:17:47.395041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.395073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.408244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eecc78 00:22:42.408 [2024-11-27 06:17:47.410233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.408 [2024-11-27 06:17:47.410261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:42.408 [2024-11-27 06:17:47.423009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eed4e8 00:22:42.409 [2024-11-27 06:17:47.424862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.409 [2024-11-27 06:17:47.424905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:42.409 [2024-11-27 06:17:47.438223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eedd58 00:22:42.409 [2024-11-27 06:17:47.440263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.409 [2024-11-27 06:17:47.440296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:42.409 [2024-11-27 06:17:47.453481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eee5c8 00:22:42.409 [2024-11-27 06:17:47.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.409 [2024-11-27 06:17:47.455546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:42.409 [2024-11-27 06:17:47.468389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeee38 00:22:42.409 [2024-11-27 06:17:47.470340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.409 [2024-11-27 06:17:47.470373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:42.409 [2024-11-27 06:17:47.483887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eef6a8 00:22:42.409 [2024-11-27 06:17:47.485765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.409 [2024-11-27 06:17:47.485806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.409 [2024-11-27 06:17:47.499186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016eeff18 00:22:42.409 [2024-11-27 06:17:47.501099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.409 [2024-11-27 06:17:47.501152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.515583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef0788 00:22:42.668 [2024-11-27 06:17:47.517460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.531239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef0ff8 00:22:42.668 [2024-11-27 06:17:47.533154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.533224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.546799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef1868 00:22:42.668 [2024-11-27 06:17:47.548588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.548633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.562111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef20d8 00:22:42.668 [2024-11-27 06:17:47.563914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.563946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.577725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef2948 00:22:42.668 [2024-11-27 06:17:47.579519] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.579595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.593034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef31b8 00:22:42.668 [2024-11-27 06:17:47.594905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.594949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.608641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef3a28 00:22:42.668 [2024-11-27 06:17:47.610406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.610436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.623712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef4298 00:22:42.668 [2024-11-27 06:17:47.625412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.625445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.638978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef4b08 00:22:42.668 [2024-11-27 06:17:47.640655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.640701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.654319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef5378 00:22:42.668 [2024-11-27 06:17:47.655925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.655969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.669779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef5be8 00:22:42.668 [2024-11-27 06:17:47.671579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.671610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.685308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef6458 00:22:42.668 [2024-11-27 
06:17:47.687105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.687170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.702471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef6cc8 00:22:42.668 [2024-11-27 06:17:47.704199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.704234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.718969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef7538 00:22:42.668 [2024-11-27 06:17:47.720635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.720677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.735661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef7da8 00:22:42.668 [2024-11-27 06:17:47.737347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.737392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.668 [2024-11-27 06:17:47.751729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef8618 00:22:42.668 [2024-11-27 06:17:47.753340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.668 [2024-11-27 06:17:47.753368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.927 [2024-11-27 06:17:47.767759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef8e88 00:22:42.927 [2024-11-27 06:17:47.769323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.927 [2024-11-27 06:17:47.769354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:42.927 [2024-11-27 06:17:47.783759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef96f8 00:22:42.927 [2024-11-27 06:17:47.785281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.927 [2024-11-27 06:17:47.785310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.927 [2024-11-27 06:17:47.799520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa24ae0) with pdu=0x200016ef9f68 00:22:42.927 
[2024-11-27 06:17:47.801641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.927 [2024-11-27 06:17:47.801702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:22:42.927 16003.50 IOPS, 62.51 MiB/s
00:22:42.927 Latency(us)
00:22:42.927 [2024-11-27T06:17:48.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:42.927 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:42.927 nvme0n1 : 2.01 16024.30 62.59 0.00 0.00 7979.69 2487.39 30742.34
00:22:42.927 [2024-11-27T06:17:48.024Z] ===================================================================================================================
00:22:42.927 [2024-11-27T06:17:48.024Z] Total : 16024.30 62.59 0.00 0.00 7979.69 2487.39 30742.34
00:22:42.927 {
00:22:42.927 "results": [
00:22:42.927 {
00:22:42.927 "job": "nvme0n1",
00:22:42.927 "core_mask": "0x2",
00:22:42.927 "workload": "randwrite",
00:22:42.927 "status": "finished",
00:22:42.927 "queue_depth": 128,
00:22:42.927 "io_size": 4096,
00:22:42.927 "runtime": 2.005392,
00:22:42.927 "iops": 16024.298491267542,
00:22:42.927 "mibps": 62.594915981513836,
00:22:42.927 "io_failed": 0,
00:22:42.927 "io_timeout": 0,
00:22:42.927 "avg_latency_us": 7979.6907342602935,
00:22:42.927 "min_latency_us": 2487.389090909091,
00:22:42.927 "max_latency_us": 30742.34181818182
00:22:42.927 }
00:22:42.927 ],
00:22:42.927 "core_count": 1
00:22:42.927 }
00:22:42.927 06:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:42.927 06:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:42.927 06:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:42.927 | .driver_specific
00:22:42.927 | .nvme_error
00:22:42.927 | .status_code
00:22:42.927 | .command_transient_transport_error'
00:22:42.927 06:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 126 > 0 ))
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80687
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80687 ']'
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80687
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80687
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:43.186 killing process with pid 80687
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80687'
00:22:43.186 Received shutdown signal, test time was about 2.000000 seconds
00:22:43.186
00:22:43.186 Latency(us)
[2024-11-27T06:17:48.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-27T06:17:48.283Z] ===================================================================================================================
[2024-11-27T06:17:48.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80687
00:22:43.186 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80687
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80744
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80744 /var/tmp/bperf.sock
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80744 ']'
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:43.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:43.445 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:43.445 [2024-11-27 06:17:48.443102] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization...
00:22:43.445 [2024-11-27 06:17:48.443366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80744 ]
00:22:43.445 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:43.445 Zero copy mechanism will not be used.
00:22:43.703 [2024-11-27 06:17:48.585703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:43.703 [2024-11-27 06:17:48.638062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:43.703 [2024-11-27 06:17:48.695693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:22:43.704 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:43.704 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:22:43.704 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:43.704 06:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:44.269 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:44.269 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:44.269 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:44.269 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:44.269 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:44.269 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:44.528 nvme0n1
00:22:44.528 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:44.528 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:44.528 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:44.528 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:44.528 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:44.528 06:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:44.528 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:44.528 Zero copy mechanism will not be used.
00:22:44.528 Running I/O for 2 seconds...
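The xtrace block above is the complete setup for this error run. Condensed into execution order, and written with the same helper names (bperf_rpc, rpc_cmd, bperf_py) that host/digest.sh itself traces, the sequence is roughly the sketch below; the comments are explanatory only, everything else is taken verbatim from the trace:

    # sketch assembled from the trace above, not an excerpt of host/digest.sh
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # bdevperf side: keep per-controller NVMe error counters
    rpc_cmd accel_error_inject_error -o crc32c -t disable                      # injection off while the controller is attached
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
              -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # data digest enabled on the TCP connection
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32                # start corrupting crc32c results
    bperf_py perform_tests                                                     # 2-second randwrite run
    bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each corrupted digest surfaces in the log as a "Data digest error on tqpair" followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, and the final bdev_get_iostat | jq step is how get_transient_errcount produces the count that the earlier "(( 126 > 0 ))" check evaluates.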
00:22:44.528 [2024-11-27 06:17:49.561804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.561882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.561908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.566826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.567068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.567091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.571784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.571850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.571872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.576505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.576565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.576585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.581483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.581561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.581584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.586649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.586941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.586963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.591889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.591956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.591976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.597326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.597409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.528 [2024-11-27 06:17:49.597432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.528 [2024-11-27 06:17:49.602875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.528 [2024-11-27 06:17:49.602979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.529 [2024-11-27 06:17:49.603003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.529 [2024-11-27 06:17:49.608232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.529 [2024-11-27 06:17:49.608307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.529 [2024-11-27 06:17:49.608329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.529 [2024-11-27 06:17:49.613134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.529 [2024-11-27 06:17:49.613367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.529 [2024-11-27 06:17:49.613389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.529 [2024-11-27 06:17:49.618241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.529 [2024-11-27 06:17:49.618325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.529 [2024-11-27 06:17:49.618349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.529 [2024-11-27 06:17:49.623546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.529 [2024-11-27 06:17:49.623627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.529 [2024-11-27 06:17:49.623649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.789 [2024-11-27 06:17:49.629155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.789 [2024-11-27 06:17:49.629398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.789 [2024-11-27 06:17:49.629420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.789 [2024-11-27 06:17:49.634408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.789 [2024-11-27 06:17:49.634483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.789 [2024-11-27 06:17:49.634535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.789 [2024-11-27 06:17:49.639598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.639673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.639694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.645038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.645301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.645324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.650107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.650250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.650274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.655330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.655396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.655417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.660392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.660456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.660476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.665586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.665646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.665667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.670604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.670680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.670701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.675465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.675529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.675550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.680520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.680587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.680608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.685684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.685765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.685786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.690864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.690948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.690969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.696028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.696105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.696144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.701578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.701695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 
[2024-11-27 06:17:49.701732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.706712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.706790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.706812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.711789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.711870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.711892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.716886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.716969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.716991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.721857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.721932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.721956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.726857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.726934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.726955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.731830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.731915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.731937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.736741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.736804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.736824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.741937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.742020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.742041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.747078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.747173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.747196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.752283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.752373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.752395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.757712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.757851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.757874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.763143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.763226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.763249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.768534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.790 [2024-11-27 06:17:49.768610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.790 [2024-11-27 06:17:49.768631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.790 [2024-11-27 06:17:49.773860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.773943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.773985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.779160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.779257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.779279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.784324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.784405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.784427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.789317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.789397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.789417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.794409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.794509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.794545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.799478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.799575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.799596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.804298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.804379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.804400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.809226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.809308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.809331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.814152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.814294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.814317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.819098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.819196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.819216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.823984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.824117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.824140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.828874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.828954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.828975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.833891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.833987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.834008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.838819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.838899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.838919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.843622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 
[2024-11-27 06:17:49.843723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.843743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.848447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.848527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.848548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.853312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.853391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.853412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.858024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.858105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.858126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.862875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.862969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.862991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.867752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.867831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.867852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.872652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.872733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.872754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.877521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) 
with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.877600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.877620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.791 [2024-11-27 06:17:49.882646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:44.791 [2024-11-27 06:17:49.882758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.791 [2024-11-27 06:17:49.882779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.050 [2024-11-27 06:17:49.887864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.887977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.887997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.892870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.892961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.892982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.898252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.898355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.898378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.903547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.903628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.903649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.908449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.908529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.908550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.913362] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.913443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.913465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.918297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.918384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.918406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.923125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.923246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.923299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.928241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.928324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.928360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.933109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.933200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.933220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.937959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.938041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.938061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.942736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.942835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.942870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.947491] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.947573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.947593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.952519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.952616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.952637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.957307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.957390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.957410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.962139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.962244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.962266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.967096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.967191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.967212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.971866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.971946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.971983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.977491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.977576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.977599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:22:45.051 [2024-11-27 06:17:49.983378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.983471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.983494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.989158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.989239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.989262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:49.994831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:49.994911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:49.994934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:50.000454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:50.000534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:50.000557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:50.006329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:50.006409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:50.006432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:50.011932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:50.012016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:50.012040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:50.017664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.051 [2024-11-27 06:17:50.017737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.051 [2024-11-27 06:17:50.017760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.051 [2024-11-27 06:17:50.023430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.023511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.023534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.028961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.029038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.029062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.034678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.034764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.034787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.040393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.040469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.040492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.046002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.046090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.046113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.051331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.051407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.051430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.056957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.057043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.057066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.062584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.062668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.062691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.068368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.068450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.068473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.074050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.074139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.074173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.079437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.079517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.079540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.084879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.084961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.090248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.090336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.090359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.095793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.095864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.095888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.101536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.101616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.101639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.107166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.107236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.107259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.112725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.112817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.112839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.118236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.118335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.118358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.123997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.124078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.124102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.129748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.129819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.129843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.135199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.135279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 
[2024-11-27 06:17:50.135301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.052 [2024-11-27 06:17:50.140785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.052 [2024-11-27 06:17:50.140870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.052 [2024-11-27 06:17:50.140893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.311 [2024-11-27 06:17:50.146518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.311 [2024-11-27 06:17:50.146604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.311 [2024-11-27 06:17:50.146627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.311 [2024-11-27 06:17:50.152000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.311 [2024-11-27 06:17:50.152076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.311 [2024-11-27 06:17:50.152099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.311 [2024-11-27 06:17:50.157420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.311 [2024-11-27 06:17:50.157500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.311 [2024-11-27 06:17:50.157523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.311 [2024-11-27 06:17:50.162827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.311 [2024-11-27 06:17:50.162901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.162924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.168361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.168436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.168459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.173827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.173899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.173923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.179437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.179518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.179541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.185266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.185340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.185364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.190778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.190863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.190886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.196243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.196314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.196337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.201947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.202032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.202055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.207487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.207566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.207589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.212900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.212974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.212997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.218504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.218578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.218607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.224295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.224382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.224405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.229945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.230021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.230045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.235733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.235815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.235838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.241190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.241263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.241286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.246621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.246694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.246717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.252120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.252205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.252228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.257674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.257749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.257772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.262942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.263016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.263039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.268579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.268665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.268688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.274217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.274292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.274316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.312 [2024-11-27 06:17:50.279754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.312 [2024-11-27 06:17:50.279841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.312 [2024-11-27 06:17:50.279864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.285483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.285563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.285586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.290933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.291003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.291026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.296513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.296601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.296631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.301952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.302029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.302052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.307743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.307823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.307846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.313500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.313583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.313606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.318954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.319029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.319052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.324645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.324730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.324752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.330336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 
[2024-11-27 06:17:50.330420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.330444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.335690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.335764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.335787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.341229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.341317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.341340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.346850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.346938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.346960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.352298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.352382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.352405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.357572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.357663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.357688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.363086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.363183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.363207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.368635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) 
with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.368711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.368734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.374311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.374392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.374415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.379864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.379952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.379975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.385476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.385552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.385575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.390900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.390971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.390994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.396487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.396572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-27 06:17:50.396595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.313 [2024-11-27 06:17:50.402179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.313 [2024-11-27 06:17:50.402261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.314 [2024-11-27 06:17:50.402284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.572 [2024-11-27 06:17:50.407651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.572 [2024-11-27 06:17:50.407720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.572 [2024-11-27 06:17:50.407742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.572 [2024-11-27 06:17:50.413205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.572 [2024-11-27 06:17:50.413280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.572 [2024-11-27 06:17:50.413303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.572 [2024-11-27 06:17:50.418707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.572 [2024-11-27 06:17:50.418801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.572 [2024-11-27 06:17:50.418823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.572 [2024-11-27 06:17:50.424514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.572 [2024-11-27 06:17:50.424624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.572 [2024-11-27 06:17:50.424646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.572 [2024-11-27 06:17:50.429907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.429986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.430009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.435318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.435393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.435416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.440735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.440806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.440828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.446406] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.446491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.446517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.452086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.452175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.452198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.457571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.457643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.457666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.462936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.463008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.463031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.468710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.468792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.468815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.474295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.474373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.474396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.479885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.479959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.479982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:22:45.573 [2024-11-27 06:17:50.485629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.485737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.485759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.491249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.491326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.491348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.496642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.496726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.496749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.502302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.502379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.502401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.507703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.507777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.507800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.513156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.513238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.513261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.518677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.518762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.518784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.524322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.524401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.524423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.530145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.530245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.530268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.535703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.535789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.535811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.541092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.541193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.573 [2024-11-27 06:17:50.541216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.573 [2024-11-27 06:17:50.546357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.573 [2024-11-27 06:17:50.546431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.546454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.551331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.551402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.551424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.556470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.558145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.558193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.574 5776.00 IOPS, 722.00 MiB/s [2024-11-27T06:17:50.671Z] [2024-11-27 06:17:50.562218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.562304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.562327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.566717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.566803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.566825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.571205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.571288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.571324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.575692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.575781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.575803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.580190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.580284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.580307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.584696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.584777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.584800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.588978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.589188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 
[2024-11-27 06:17:50.589222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.593438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.593544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.593566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.597992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.598061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.598084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.602468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.602545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.602567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.606958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.607042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.607065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.611361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.611448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.611471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.615830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.615945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.615967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.620104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.620304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.620339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.624509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.624627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.624649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.628998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.629082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.629105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.633391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.633471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.633493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.637892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.637963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.637986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.642365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.642456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.642479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.646813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.646928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.646952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.651313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.651392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.651415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.655670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.655917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.655950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.659945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.660124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.660171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.574 [2024-11-27 06:17:50.664280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.574 [2024-11-27 06:17:50.664356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.574 [2024-11-27 06:17:50.664378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.668728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.668827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.668850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.673122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.673222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.673250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.677548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.677624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.677646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.681957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.682198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.682230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.686345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.686525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.686558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.690667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.690754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.690777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.695115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.695207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.695230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.699571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.699646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.699669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.703968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.704051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.704073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.708398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.708477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.708500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.712773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.712873] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.712896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.717121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.717280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.717309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.721514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.721727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.721752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.725822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.725892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.725915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.730239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.730312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.730335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.834 [2024-11-27 06:17:50.734683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.834 [2024-11-27 06:17:50.734760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.834 [2024-11-27 06:17:50.734782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.739238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.739331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.739354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.743636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 
00:22:45.835 [2024-11-27 06:17:50.743721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.743743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.748030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.748118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.748155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.752443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.752578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.752602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.756877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.757041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.757074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.761306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.761498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.761530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.765561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.765631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.770010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.770085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.770107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.774438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.774508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.774530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.778939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.779023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.779045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.783342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.783413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.783436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.787727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.787810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.787832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.791999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.792086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.792108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.796397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.796478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.796501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.800860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.800929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.800951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.805344] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.805437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.805459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.809853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.809931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.809953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.814324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.814403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.819266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.819467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.819499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.823649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.823725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.823748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.828042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.828110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.828148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.835 [2024-11-27 06:17:50.832488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.835 [2024-11-27 06:17:50.832561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.835 [2024-11-27 06:17:50.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:22:45.835 [2024-11-27 06:17:50.836963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.837037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.837059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.841477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.841546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.841568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.845976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.846059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.846082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.850478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.850573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.850596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.854685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.854860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.854893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.859105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.859197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.859219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.863565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.863644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.863667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.868039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.868119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.868156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.872495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.872567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.872590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.876955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.877051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.877074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.881477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.881581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.881604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.885872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.886071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.886105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.890253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.890448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.890480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.894739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.894878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.894899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.899154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.899237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.899259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.903702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.903779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.903801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.908162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.908248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.908270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.912627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.912723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.912745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.917112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.917211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.917233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.921366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.921552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.921585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:45.836 [2024-11-27 06:17:50.925784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:45.836 [2024-11-27 06:17:50.925859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.836 [2024-11-27 06:17:50.925881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.930251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.930349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.930377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.934692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.934764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.934787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.939098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.939206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.939229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.943521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.943711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.943750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.947903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.948072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.948111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.952335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.952513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.952553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.956633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.956718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 
[2024-11-27 06:17:50.956742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.961093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.961182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.961210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.965504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.965585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.965613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.969896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.969969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.969996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.974281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.974427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.974461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.978747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.978994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.979027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.983000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.983077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.983097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.987373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.987468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.987488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.991904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.991998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.992018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:50.996340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:50.996433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:50.996454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:51.000819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.097 [2024-11-27 06:17:51.000901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.097 [2024-11-27 06:17:51.000921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.097 [2024-11-27 06:17:51.005214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.005314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.009595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.009691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.009712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.013833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.013943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.013964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.018073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.018231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.018253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.022860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.022948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.022985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.027259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.027370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.027390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.031742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.031837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.031857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.035902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.036081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.036102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.040196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.040350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.040387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.044417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.044608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.044646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.048537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.048613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.048632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.052774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.052853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.052873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.057056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.057140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.057162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.061576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.061695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.061716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.066020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.066110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.066129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.070750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.070864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.070884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.075708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.075865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.075901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.080186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 
[2024-11-27 06:17:51.080287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.080308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.084467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.084582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.084601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.088870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.088993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.089013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.093723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.093874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.093894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.098482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.098739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.098766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.102903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.103016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.103035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.107145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.107282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.107330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.111467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) 
with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.111576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.111596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.115688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.115841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.115878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.119986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.098 [2024-11-27 06:17:51.120066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.098 [2024-11-27 06:17:51.120086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.098 [2024-11-27 06:17:51.124272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.124359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.124403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.128705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.128824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.128844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.133087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.133179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.133214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.137100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.137326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.137348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.141329] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.141422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.141442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.145451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.145545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.145564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.149663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.149740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.149760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.153783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.153861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.153882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.158053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.158129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.158188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.162469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.162588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.162609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.166830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.167034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.167055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.171068] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.171173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.171194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.175444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.175552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.175588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.179910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.179990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.180010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.184272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.184350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.184370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.099 [2024-11-27 06:17:51.188815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.099 [2024-11-27 06:17:51.188913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.099 [2024-11-27 06:17:51.188949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.193656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.193735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.193754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.198136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.198303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.198325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:22:46.360 [2024-11-27 06:17:51.202465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.202600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.202621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.206626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.206864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.206886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.210856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.210959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.210979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.215098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.215215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.215252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.219291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.219370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.219390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.223530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.223630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.223649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.227935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.228049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.228069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.232400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.232518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.232538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.236738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.236837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.236857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.241188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.241284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.241304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.245847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.245944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.245982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.250568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.250684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.250706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.360 [2024-11-27 06:17:51.255633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.360 [2024-11-27 06:17:51.255907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.360 [2024-11-27 06:17:51.255944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.260187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.260310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.260361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.265034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.265123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.265146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.269719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.269830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.269851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.274631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.274749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.274770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.279600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.279725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.279747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.284541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.284651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.284671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.289683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.289819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.289842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.294396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.294573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.294595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.298977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.299104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.299124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.303680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.303763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.303784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.308090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.308193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.308214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.312705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.312799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.312820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.317137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.317213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.317234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.321634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.321723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.321743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.326129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.326258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 
[2024-11-27 06:17:51.326279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.331066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.331325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.331371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.335835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.336063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.336108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.339888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.340043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.340063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.344096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.344205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.344225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.348550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.348660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.348681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.353188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.353295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.353316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.357687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.357763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.357783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.361784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.361873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.361893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.365954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.366066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.366087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.370523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.370616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.370637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.374913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.375120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.375159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.361 [2024-11-27 06:17:51.379098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.361 [2024-11-27 06:17:51.379201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.361 [2024-11-27 06:17:51.379222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.383613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.383694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.383715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.388228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.388332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.393060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.393189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.393225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.397861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.397953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.397975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.402784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.402894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.402915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.407562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.407787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.407808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.412024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.412120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.412155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.416737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.416837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.416858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.421364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.421470] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.421491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.425993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.426083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.426104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.430624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.430744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.430764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.435021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.435102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.435122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.439645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.439758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.439779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.444251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.444537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.444568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.448644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.448738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.448758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.362 [2024-11-27 06:17:51.453573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.362 [2024-11-27 06:17:51.453677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.362 [2024-11-27 06:17:51.453707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.458308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.458379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.458401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.463095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.463210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.463245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.467509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.467609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.467629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.472075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.472162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.472186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.476429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.476578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.476598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.480696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.480857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.480878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.485248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 
[2024-11-27 06:17:51.485499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.485540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.489320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.489450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.489471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.493629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.622 [2024-11-27 06:17:51.493745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-27 06:17:51.493767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.622 [2024-11-27 06:17:51.497978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.498086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.498107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.502236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.502340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.502361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.506798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.506906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.511068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.511180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.511215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.515463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with 
pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.515576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.515597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.519868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.519976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.519996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.524303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.524397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.524418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.528733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.528907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.528928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.533119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.533221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.533240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.537657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.537739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.537758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.541893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.541985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.542005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.546128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.546303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.546324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.550724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.550822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.550859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.623 [2024-11-27 06:17:51.555120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa115b0) with pdu=0x200016eff3c8 00:22:46.623 [2024-11-27 06:17:51.555266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.623 [2024-11-27 06:17:51.555287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:46.623 6363.50 IOPS, 795.44 MiB/s 00:22:46.623 Latency(us) 00:22:46.623 [2024-11-27T06:17:51.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.623 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:46.623 nvme0n1 : 2.00 6361.54 795.19 0.00 0.00 2509.22 1690.53 7983.48 00:22:46.623 [2024-11-27T06:17:51.720Z] =================================================================================================================== 00:22:46.623 [2024-11-27T06:17:51.720Z] Total : 6361.54 795.19 0.00 0.00 2509.22 1690.53 7983.48 00:22:46.623 { 00:22:46.623 "results": [ 00:22:46.623 { 00:22:46.623 "job": "nvme0n1", 00:22:46.623 "core_mask": "0x2", 00:22:46.623 "workload": "randwrite", 00:22:46.623 "status": "finished", 00:22:46.623 "queue_depth": 16, 00:22:46.623 "io_size": 131072, 00:22:46.623 "runtime": 2.003917, 00:22:46.623 "iops": 6361.540922104059, 00:22:46.623 "mibps": 795.1926152630074, 00:22:46.623 "io_failed": 0, 00:22:46.623 "io_timeout": 0, 00:22:46.623 "avg_latency_us": 2509.2162364149813, 00:22:46.623 "min_latency_us": 1690.530909090909, 00:22:46.623 "max_latency_us": 7983.476363636363 00:22:46.623 } 00:22:46.623 ], 00:22:46.623 "core_count": 1 00:22:46.623 } 00:22:46.623 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:46.623 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:46.623 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:46.623 | .driver_specific 00:22:46.623 | .nvme_error 00:22:46.623 | .status_code 00:22:46.623 | .command_transient_transport_error' 00:22:46.623 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 411 > 0 )) 00:22:46.883 06:17:51 
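The trace above is the pass/fail check of the digest-error test: the iostat of the bperf-attached bdev is fetched over the bperf RPC socket and the transient-transport-error counter is extracted with jq; 411 such errors were recorded, so the (( 411 > 0 )) assertion holds. A minimal sketch of that query follows, assuming the rpc.py path, socket and jq filter shown in the log; the body of the digest.sh helper is a reconstruction, not a verbatim copy.

# Sketch of the get_transient_errcount step seen in the trace above.
# The rpc.py path, /var/tmp/bperf.sock and the jq filter are taken from the
# log; the function body itself is an illustrative reconstruction.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The test passes only if the injected data-digest errors surfaced as
# transient transport errors on the initiator side:
(( $(get_transient_errcount nvme0n1) > 0 ))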
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80744 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80744 ']' 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80744 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80744 00:22:46.883 killing process with pid 80744 00:22:46.883 Received shutdown signal, test time was about 2.000000 seconds 00:22:46.883 00:22:46.883 Latency(us) 00:22:46.883 [2024-11-27T06:17:51.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.883 [2024-11-27T06:17:51.980Z] =================================================================================================================== 00:22:46.883 [2024-11-27T06:17:51.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80744' 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80744 00:22:46.883 06:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80744 00:22:47.142 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80548 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80548 ']' 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80548 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80548 00:22:47.143 killing process with pid 80548 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80548' 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80548 00:22:47.143 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80548 00:22:47.402 ************************************ 00:22:47.402 END TEST nvmf_digest_error 00:22:47.402 ************************************ 00:22:47.402 00:22:47.402 real 0m16.498s 00:22:47.402 user 0m31.594s 
00:22:47.402 sys 0m4.689s 00:22:47.402 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.403 rmmod nvme_tcp 00:22:47.403 rmmod nvme_fabrics 00:22:47.403 rmmod nvme_keyring 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80548 ']' 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80548 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80548 ']' 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80548 00:22:47.403 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80548) - No such process 00:22:47.403 Process with pid 80548 is not found 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80548 is not found' 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:47.403 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip 
link set nvmf_init_br down 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:47.683 00:22:47.683 real 0m34.184s 00:22:47.683 user 1m3.099s 00:22:47.683 sys 0m10.755s 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:47.683 ************************************ 00:22:47.683 END TEST nvmf_digest 00:22:47.683 ************************************ 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.683 06:17:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.941 ************************************ 00:22:47.942 START TEST nvmf_host_multipath 00:22:47.942 ************************************ 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:47.942 * Looking for test storage... 
00:22:47.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.942 --rc genhtml_branch_coverage=1 00:22:47.942 --rc genhtml_function_coverage=1 00:22:47.942 --rc genhtml_legend=1 00:22:47.942 --rc geninfo_all_blocks=1 00:22:47.942 --rc geninfo_unexecuted_blocks=1 00:22:47.942 00:22:47.942 ' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.942 --rc genhtml_branch_coverage=1 00:22:47.942 --rc genhtml_function_coverage=1 00:22:47.942 --rc genhtml_legend=1 00:22:47.942 --rc geninfo_all_blocks=1 00:22:47.942 --rc geninfo_unexecuted_blocks=1 00:22:47.942 00:22:47.942 ' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.942 --rc genhtml_branch_coverage=1 00:22:47.942 --rc genhtml_function_coverage=1 00:22:47.942 --rc genhtml_legend=1 00:22:47.942 --rc geninfo_all_blocks=1 00:22:47.942 --rc geninfo_unexecuted_blocks=1 00:22:47.942 00:22:47.942 ' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.942 --rc genhtml_branch_coverage=1 00:22:47.942 --rc genhtml_function_coverage=1 00:22:47.942 --rc genhtml_legend=1 00:22:47.942 --rc geninfo_all_blocks=1 00:22:47.942 --rc geninfo_unexecuted_blocks=1 00:22:47.942 00:22:47.942 ' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.942 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.942 06:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:47.942 Cannot find device "nvmf_init_br" 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:47.942 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:47.942 Cannot find device "nvmf_init_br2" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:48.200 Cannot find device "nvmf_tgt_br" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:48.200 Cannot find device "nvmf_tgt_br2" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:48.200 Cannot find device "nvmf_init_br" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:48.200 Cannot find device "nvmf_init_br2" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:48.200 Cannot find device "nvmf_tgt_br" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:48.200 Cannot find device "nvmf_tgt_br2" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:48.200 Cannot find device "nvmf_br" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:48.200 Cannot find device "nvmf_init_if" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:48.200 Cannot find device "nvmf_init_if2" 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:48.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:48.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:48.200 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
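For reference, the nvmf_veth_init sequence above builds a two-namespace test topology: the initiator-side veth pairs (nvmf_init_if/nvmf_init_br and nvmf_init_if2/nvmf_init_br2) stay in the default namespace with 10.0.0.1/24 and 10.0.0.2/24, the target-side pairs (nvmf_tgt_if/nvmf_tgt_br and nvmf_tgt_if2/nvmf_tgt_br2) are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, and the bridge-side peers are then enslaved to nvmf_br (continued in the log below) so initiator and target can reach each other. A condensed sketch of that setup, shown for one interface of each kind and using only the names, addresses, and port that appear in the log (the real common.sh helper also brings the links up and repeats this for the second pair):

    # initiator-side veth pair stays in the default namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if

    # target-side veth pair is moved into the SPDK target namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bridge ties the br-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP (port 4420) in through the initiator interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT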
00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:48.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:48.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:22:48.459 00:22:48.459 --- 10.0.0.3 ping statistics --- 00:22:48.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.459 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:48.459 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:48.459 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:22:48.459 00:22:48.459 --- 10.0.0.4 ping statistics --- 00:22:48.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.459 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:48.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:22:48.459 00:22:48.459 --- 10.0.0.1 ping statistics --- 00:22:48.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.459 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:48.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:48.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:22:48.459 00:22:48.459 --- 10.0.0.2 ping statistics --- 00:22:48.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.459 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81051 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:48.459 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81051 00:22:48.460 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81051 ']' 00:22:48.460 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.460 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.460 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.460 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.460 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:48.460 [2024-11-27 06:17:53.508672] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:22:48.460 [2024-11-27 06:17:53.508761] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.717 [2024-11-27 06:17:53.665339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:48.717 [2024-11-27 06:17:53.726248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.717 [2024-11-27 06:17:53.726315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.717 [2024-11-27 06:17:53.726330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.717 [2024-11-27 06:17:53.726344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.717 [2024-11-27 06:17:53.726353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.717 [2024-11-27 06:17:53.727734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.717 [2024-11-27 06:17:53.727748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.717 [2024-11-27 06:17:53.789576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81051 00:22:48.975 06:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:49.277 [2024-11-27 06:17:54.192501] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.277 06:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:49.534 Malloc0 00:22:49.534 06:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:49.793 06:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.050 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:50.308 [2024-11-27 06:17:55.295712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.308 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:50.566 [2024-11-27 06:17:55.528059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81096 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81096 /var/tmp/bdevperf.sock 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81096 ']' 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.566 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.567 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.567 06:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:51.545 06:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.545 06:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:51.545 06:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:51.802 06:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:52.368 Nvme0n1 00:22:52.368 06:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:52.625 Nvme0n1 00:22:52.625 06:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:52.625 06:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.558 06:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:53.558 06:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:53.817 06:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:54.075 06:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:54.333 06:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81141 00:22:54.333 06:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:54.334 06:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.894 Attaching 4 probes... 00:23:00.894 @path[10.0.0.3, 4421]: 17738 00:23:00.894 @path[10.0.0.3, 4421]: 18705 00:23:00.894 @path[10.0.0.3, 4421]: 18843 00:23:00.894 @path[10.0.0.3, 4421]: 18760 00:23:00.894 @path[10.0.0.3, 4421]: 18486 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81141 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:00.894 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:01.152 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:01.152 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81260 00:23:01.152 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:01.152 06:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:07.766 06:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:07.766 06:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:07.766 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:07.766 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:07.766 Attaching 4 probes... 00:23:07.767 @path[10.0.0.3, 4420]: 17351 00:23:07.767 @path[10.0.0.3, 4420]: 16125 00:23:07.767 @path[10.0.0.3, 4420]: 15950 00:23:07.767 @path[10.0.0.3, 4420]: 15792 00:23:07.767 @path[10.0.0.3, 4420]: 15767 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81260 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:07.767 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:08.025 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:08.025 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81377 00:23:08.025 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:08.025 06:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:14.591 06:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:14.591 06:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:14.591 Attaching 4 probes... 00:23:14.591 @path[10.0.0.3, 4421]: 15219 00:23:14.591 @path[10.0.0.3, 4421]: 18272 00:23:14.591 @path[10.0.0.3, 4421]: 18349 00:23:14.591 @path[10.0.0.3, 4421]: 18513 00:23:14.591 @path[10.0.0.3, 4421]: 18432 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81377 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:14.591 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:14.850 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:14.850 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81485 00:23:14.850 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:14.850 06:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:21.412 06:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:21.412 06:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:21.412 Attaching 4 probes... 
00:23:21.412 00:23:21.412 00:23:21.412 00:23:21.412 00:23:21.412 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81485 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:21.412 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:21.672 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:21.672 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81604 00:23:21.672 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:21.672 06:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:28.235 Attaching 4 probes... 
00:23:28.235 @path[10.0.0.3, 4421]: 18983 00:23:28.235 @path[10.0.0.3, 4421]: 17385 00:23:28.235 @path[10.0.0.3, 4421]: 16840 00:23:28.235 @path[10.0.0.3, 4421]: 17147 00:23:28.235 @path[10.0.0.3, 4421]: 17123 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:28.235 06:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81604 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:28.235 06:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:29.171 06:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:29.171 06:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81722 00:23:29.171 06:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:29.171 06:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:35.735 Attaching 4 probes... 
00:23:35.735 @path[10.0.0.3, 4420]: 16832 00:23:35.735 @path[10.0.0.3, 4420]: 16920 00:23:35.735 @path[10.0.0.3, 4420]: 16908 00:23:35.735 @path[10.0.0.3, 4420]: 15794 00:23:35.735 @path[10.0.0.3, 4420]: 17881 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81722 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:35.735 [2024-11-27 06:18:40.752905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:35.735 06:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:35.994 06:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:42.556 06:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:42.556 06:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81903 00:23:42.556 06:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81051 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:42.556 06:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:49.127 Attaching 4 probes... 
00:23:49.127 @path[10.0.0.3, 4421]: 17514 00:23:49.127 @path[10.0.0.3, 4421]: 17952 00:23:49.127 @path[10.0.0.3, 4421]: 17384 00:23:49.127 @path[10.0.0.3, 4421]: 17320 00:23:49.127 @path[10.0.0.3, 4421]: 17403 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81903 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81096 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81096 ']' 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81096 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81096 00:23:49.127 killing process with pid 81096 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81096' 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81096 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81096 00:23:49.127 { 00:23:49.127 "results": [ 00:23:49.127 { 00:23:49.127 "job": "Nvme0n1", 00:23:49.127 "core_mask": "0x4", 00:23:49.127 "workload": "verify", 00:23:49.127 "status": "terminated", 00:23:49.127 "verify_range": { 00:23:49.127 "start": 0, 00:23:49.127 "length": 16384 00:23:49.127 }, 00:23:49.127 "queue_depth": 128, 00:23:49.127 "io_size": 4096, 00:23:49.127 "runtime": 55.780077, 00:23:49.127 "iops": 7601.101016766255, 00:23:49.127 "mibps": 29.691800846743185, 00:23:49.127 "io_failed": 0, 00:23:49.127 "io_timeout": 0, 00:23:49.127 "avg_latency_us": 16809.063058897187, 00:23:49.127 "min_latency_us": 636.7418181818182, 00:23:49.127 "max_latency_us": 7046430.72 00:23:49.127 } 00:23:49.127 ], 00:23:49.127 "core_count": 1 00:23:49.127 } 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81096 00:23:49.127 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:49.127 [2024-11-27 06:17:55.596068] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 
24.03.0 initialization... 00:23:49.127 [2024-11-27 06:17:55.596207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81096 ] 00:23:49.127 [2024-11-27 06:17:55.745996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.127 [2024-11-27 06:17:55.819576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.127 [2024-11-27 06:17:55.884258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:49.127 Running I/O for 90 seconds... 00:23:49.127 9605.00 IOPS, 37.52 MiB/s [2024-11-27T06:18:54.224Z] 9584.00 IOPS, 37.44 MiB/s [2024-11-27T06:18:54.224Z] 9472.00 IOPS, 37.00 MiB/s [2024-11-27T06:18:54.224Z] 9410.00 IOPS, 36.76 MiB/s [2024-11-27T06:18:54.224Z] 9423.80 IOPS, 36.81 MiB/s [2024-11-27T06:18:54.224Z] 9410.50 IOPS, 36.76 MiB/s [2024-11-27T06:18:54.224Z] 9399.71 IOPS, 36.72 MiB/s [2024-11-27T06:18:54.224Z] 9389.75 IOPS, 36.68 MiB/s [2024-11-27T06:18:54.224Z] [2024-11-27 06:18:05.977530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.127 [2024-11-27 06:18:05.977964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.977986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.127 [2024-11-27 06:18:05.978005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.978028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.127 [2024-11-27 06:18:05.978085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:49.127 [2024-11-27 06:18:05.978111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.127 [2024-11-27 06:18:05.978145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.978714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.978762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.978803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:113 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.978843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.978883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.978923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.978962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.978985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 
06:18:05.979277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.128 [2024-11-27 06:18:05.979767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.128 [2024-11-27 06:18:05.979806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:49.128 [2024-11-27 06:18:05.979833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.979851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.979873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.979891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.979914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.979931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.979954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.979971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.979993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.980443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:49.129 [2024-11-27 06:18:05.980526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:109 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.980965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.980982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.981023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.981063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.129 [2024-11-27 06:18:05.981103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:49.129 [2024-11-27 06:18:05.981398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.129 [2024-11-27 06:18:05.981415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.981456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 
cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.981973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.981990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.982548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:49.130 [2024-11-27 06:18:05.982666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.982809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.982826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.130 [2024-11-27 06:18:05.984282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.984332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.984376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.984416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.984457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:124 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.130 [2024-11-27 06:18:05.984498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:49.130 [2024-11-27 06:18:05.984520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:05.984537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:05.984562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:05.984591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:05.984641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:05.984664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:49.131 9368.78 IOPS, 36.60 MiB/s [2024-11-27T06:18:54.228Z] 9304.70 IOPS, 36.35 MiB/s [2024-11-27T06:18:54.228Z] 9185.27 IOPS, 35.88 MiB/s [2024-11-27T06:18:54.228Z] 9085.83 IOPS, 35.49 MiB/s [2024-11-27T06:18:54.228Z] 8994.31 IOPS, 35.13 MiB/s [2024-11-27T06:18:54.228Z] 8914.14 IOPS, 34.82 MiB/s [2024-11-27T06:18:54.228Z] [2024-11-27 06:18:12.594471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.594884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.594921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.594959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.594980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.594995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.595032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.595068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.595134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.595202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.131 [2024-11-27 06:18:12.595281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:49.131 
[2024-11-27 06:18:12.595908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.595962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.595984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.131 [2024-11-27 06:18:12.596011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:49.131 [2024-11-27 06:18:12.596036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.596814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.596981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.596996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.597030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.597064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.597098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.132 [2024-11-27 06:18:12.597395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 
[2024-11-27 06:18:12.597440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.597474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.132 [2024-11-27 06:18:12.597509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:49.132 [2024-11-27 06:18:12.597529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.597942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.597975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:32 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.598844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.598972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.598986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.599006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.599019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.599047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.599062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.599082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.599106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.599127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.133 [2024-11-27 06:18:12.599141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:23:49.133 [2024-11-27 06:18:12.599161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.599175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.599208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.133 [2024-11-27 06:18:12.599223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:49.133 [2024-11-27 06:18:12.599260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.599275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.599352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.599406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.599452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.599505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.599565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.599970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.599994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:12.600016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600168] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:12.600336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:12.600357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.134 8804.67 IOPS, 34.39 MiB/s [2024-11-27T06:18:54.231Z] 8290.81 IOPS, 32.39 MiB/s [2024-11-27T06:18:54.231Z] 8346.18 IOPS, 32.60 MiB/s [2024-11-27T06:18:54.231Z] 8391.83 IOPS, 32.78 MiB/s [2024-11-27T06:18:54.231Z] 8431.00 IOPS, 32.93 MiB/s [2024-11-27T06:18:54.231Z] 8474.25 IOPS, 33.10 MiB/s [2024-11-27T06:18:54.231Z] 8509.57 IOPS, 33.24 MiB/s [2024-11-27T06:18:54.231Z] 8530.77 IOPS, 33.32 MiB/s [2024-11-27T06:18:54.231Z] [2024-11-27 06:18:19.870318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.134 [2024-11-27 06:18:19.870718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.870757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.870827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.870865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.870903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.870940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.870977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.870998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:78 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.871013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:49.134 [2024-11-27 06:18:19.871034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.134 [2024-11-27 06:18:19.871051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871458] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.135 [2024-11-27 06:18:19.871739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 
m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.871968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.871985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.135 [2024-11-27 06:18:19.872704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:49.135 [2024-11-27 06:18:19.872726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.872743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.872764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.872781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.872803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.872820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.872852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.872870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.872891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.872908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.872929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.872946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.872967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.872984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:49.136 [2024-11-27 06:18:19.873062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.873709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.873975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.873991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.874013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.136 [2024-11-27 06:18:19.874030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.874070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.136 [2024-11-27 06:18:19.874091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:49.136 [2024-11-27 06:18:19.874113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:23:49.137 [2024-11-27 06:18:19.874305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.874732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.874771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.874809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.874847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.874896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.874934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.874972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.874994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.875352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.875988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.137 [2024-11-27 06:18:19.876016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:49.137 [2024-11-27 06:18:19.876179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:49.137 [2024-11-27 06:18:19.876404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.137 [2024-11-27 06:18:19.876432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:49.137 8247.52 IOPS, 32.22 MiB/s [2024-11-27T06:18:54.234Z] 7903.88 IOPS, 30.87 MiB/s [2024-11-27T06:18:54.234Z] 7587.72 IOPS, 29.64 MiB/s [2024-11-27T06:18:54.235Z] 7295.88 IOPS, 28.50 MiB/s [2024-11-27T06:18:54.235Z] 7025.67 IOPS, 27.44 MiB/s [2024-11-27T06:18:54.235Z] 6774.75 IOPS, 26.46 MiB/s [2024-11-27T06:18:54.235Z] 6541.14 IOPS, 25.55 MiB/s [2024-11-27T06:18:54.235Z] 6564.13 IOPS, 25.64 MiB/s [2024-11-27T06:18:54.235Z] 6657.94 IOPS, 26.01 MiB/s [2024-11-27T06:18:54.235Z] 6716.62 IOPS, 26.24 MiB/s [2024-11-27T06:18:54.235Z] 6761.33 IOPS, 26.41 MiB/s [2024-11-27T06:18:54.235Z] 6818.71 IOPS, 26.64 MiB/s [2024-11-27T06:18:54.235Z] 6876.00 IOPS, 26.86 MiB/s [2024-11-27T06:18:54.235Z] [2024-11-27 06:18:33.212334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130160 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.212772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.212810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.212848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.212886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.212925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.212964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.212985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.138 [2024-11-27 06:18:33.213783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.213889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.213926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.213958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.213974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.213989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.138 [2024-11-27 06:18:33.214006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.138 [2024-11-27 06:18:33.214021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.214741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 
06:18:33.214950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.214968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.214985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.139 [2024-11-27 06:18:33.215001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.139 [2024-11-27 06:18:33.215442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.139 [2024-11-27 06:18:33.215469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.215506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.215538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.215570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.215970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.215985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.140 [2024-11-27 06:18:33.216330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.140 [2024-11-27 06:18:33.216403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.140 [2024-11-27 06:18:33.216648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.140 [2024-11-27 06:18:33.216670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
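Each completion notice in this stretch of the log carries its NVMe status as an (SCT/SC) pair after the status string: (03/02) is status code type 0x3 (path related) with status code 0x02, which SPDK prints as ASYMMETRIC ACCESS INACCESSIBLE, and (00/08) is generic status code 0x08, command aborted due to SQ deletion, which is what the still-queued commands collapse to once the submission queue on that path is torn down. A tiny, purely illustrative helper (not part of the test scripts) that maps those two pairs back to the strings printed by spdk_nvme_print_completion:

    # decode_status SCT SC -- hypothetical helper for reading the (xx/yy) pairs above
    decode_status() {
        case "$1/$2" in
            00/08) echo 'ABORTED - SQ DELETION' ;;            # generic status, command aborted: SQ deleted
            03/02) echo 'ASYMMETRIC ACCESS INACCESSIBLE' ;;   # path-related status, ANA state inaccessible
            *)     echo "unrecognized status type $1, code $2" ;;
        esac
    }
    decode_status 03 02   # prints: ASYMMETRIC ACCESS INACCESSIBLE
    decode_status 00 08   # prints: ABORTED - SQ DELETION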
00:23:49.141 [2024-11-27 06:18:33.216688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.141 [2024-11-27 06:18:33.216704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.216720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.141 [2024-11-27 06:18:33.216736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.216752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.141 [2024-11-27 06:18:33.216776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.216794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0c310 is same with the state(6) to be set 00:23:49.141 [2024-11-27 06:18:33.216813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.216826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.216838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130104 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.216853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.216870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.216882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.216894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130112 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.216917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.216934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.216946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.216958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130120 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.216974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.216994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130128 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:49.141 [2024-11-27 06:18:33.217048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130136 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130528 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130536 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130544 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130552 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130560 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217413] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130568 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130576 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.141 [2024-11-27 06:18:33.217549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.141 [2024-11-27 06:18:33.217561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130584 len:8 PRP1 0x0 PRP2 0x0 00:23:49.141 [2024-11-27 06:18:33.217576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.141 [2024-11-27 06:18:33.217787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.141 [2024-11-27 06:18:33.217822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.141 [2024-11-27 06:18:33.217880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.141 [2024-11-27 06:18:33.217913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.141 [2024-11-27 06:18:33.217944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.141 [2024-11-27 06:18:33.217974] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7d1e0 is same with the state(6) to be set 00:23:49.141 [2024-11-27 06:18:33.219127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:49.141 [2024-11-27 06:18:33.219184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7d1e0 (9): Bad file descriptor 00:23:49.141 [2024-11-27 06:18:33.219615] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.141 [2024-11-27 06:18:33.219650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7d1e0 with addr=10.0.0.3, port=4421 00:23:49.141 [2024-11-27 06:18:33.219670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7d1e0 is same with the state(6) to be set 00:23:49.141 [2024-11-27 06:18:33.219707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7d1e0 (9): Bad file descriptor 00:23:49.141 [2024-11-27 06:18:33.219745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:49.141 [2024-11-27 06:18:33.219764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:49.141 [2024-11-27 06:18:33.219781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:49.141 [2024-11-27 06:18:33.219797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:49.141 [2024-11-27 06:18:33.219813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:49.141 6929.25 IOPS, 27.07 MiB/s [2024-11-27T06:18:54.238Z] 6978.30 IOPS, 27.26 MiB/s [2024-11-27T06:18:54.238Z] 7016.66 IOPS, 27.41 MiB/s [2024-11-27T06:18:54.238Z] 7053.67 IOPS, 27.55 MiB/s [2024-11-27T06:18:54.238Z] 7088.93 IOPS, 27.69 MiB/s [2024-11-27T06:18:54.238Z] 7109.20 IOPS, 27.77 MiB/s [2024-11-27T06:18:54.238Z] 7150.60 IOPS, 27.93 MiB/s [2024-11-27T06:18:54.238Z] 7197.33 IOPS, 28.11 MiB/s [2024-11-27T06:18:54.238Z] 7239.93 IOPS, 28.28 MiB/s [2024-11-27T06:18:54.238Z] 7283.13 IOPS, 28.45 MiB/s [2024-11-27T06:18:54.238Z] [2024-11-27 06:18:43.284871] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
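Taken together, the notices above are the multipath failover sequence on qid:1: the outstanding reads and writes are first completed with ASYMMETRIC ACCESS INACCESSIBLE, the remainder are aborted with SQ DELETION as the path goes away, the initiator's first reconnect to 10.0.0.3 port 4421 fails with errno 111 (connection refused) and leaves the controller in a failed state, and a later retry at 06:18:43 ends with "Resetting controller successful", after which the IOPS samples climb back toward their earlier level. A rough, illustrative way to drive and watch that transition from a shell, assuming the host-side bdevperf output is captured in a file (bdevperf.log below is a placeholder name, not the test's actual output file):

    # Tear down the active path the same way the trace at host/multipath.sh@120 shows.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Poll the host-side log until bdev_nvme reports that the reconnect to the
    # surviving listener has completed.
    until grep -q 'Resetting controller successful' bdevperf.log; do
        sleep 1
    done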
00:23:49.141 7324.63 IOPS, 28.61 MiB/s [2024-11-27T06:18:54.238Z] 7364.11 IOPS, 28.77 MiB/s [2024-11-27T06:18:54.238Z] 7403.60 IOPS, 28.92 MiB/s [2024-11-27T06:18:54.238Z] 7443.04 IOPS, 29.07 MiB/s [2024-11-27T06:18:54.238Z] 7469.06 IOPS, 29.18 MiB/s [2024-11-27T06:18:54.238Z] 7498.45 IOPS, 29.29 MiB/s [2024-11-27T06:18:54.238Z] 7525.69 IOPS, 29.40 MiB/s [2024-11-27T06:18:54.239Z] 7547.32 IOPS, 29.48 MiB/s [2024-11-27T06:18:54.239Z] 7568.52 IOPS, 29.56 MiB/s [2024-11-27T06:18:54.239Z] 7589.05 IOPS, 29.64 MiB/s [2024-11-27T06:18:54.239Z] Received shutdown signal, test time was about 55.780899 seconds 00:23:49.142 00:23:49.142 Latency(us) 00:23:49.142 [2024-11-27T06:18:54.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.142 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.142 Verification LBA range: start 0x0 length 0x4000 00:23:49.142 Nvme0n1 : 55.78 7601.10 29.69 0.00 0.00 16809.06 636.74 7046430.72 00:23:49.142 [2024-11-27T06:18:54.239Z] =================================================================================================================== 00:23:49.142 [2024-11-27T06:18:54.239Z] Total : 7601.10 29.69 0.00 0.00 16809.06 636.74 7046430.72 00:23:49.142 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:49.142 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:49.142 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:23:49.142 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.142 06:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.142 rmmod nvme_tcp 00:23:49.142 rmmod nvme_fabrics 00:23:49.142 rmmod nvme_keyring 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81051 ']' 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81051 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81051 ']' 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81051 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81051 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.142 
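The Latency(us) summary above is internally consistent with its job line: at an IO size of 4096 bytes, 7601.10 IOPS works out to roughly 29.69 MiB/s, which matches the MiB/s column reported for both the Nvme0n1 row and the Total row. A quick sanity check:

    # 7601.10 IOPS of 4096-byte I/Os expressed in MiB/s (1 MiB = 1048576 bytes)
    awk 'BEGIN { printf "%.2f MiB/s\n", 7601.10 * 4096 / (1024 * 1024) }'
    # -> 29.69 MiB/s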
06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.142 killing process with pid 81051 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81051' 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81051 00:23:49.142 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81051 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.400 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:49.401 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:23:49.659 00:23:49.659 real 1m1.835s 00:23:49.659 user 2m49.280s 00:23:49.659 sys 0m20.262s 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.659 ************************************ 00:23:49.659 END TEST nvmf_host_multipath 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:49.659 ************************************ 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.659 ************************************ 00:23:49.659 START TEST nvmf_timeout 00:23:49.659 ************************************ 00:23:49.659 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:49.918 * Looking for test storage... 00:23:49.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.918 --rc genhtml_branch_coverage=1 00:23:49.918 --rc genhtml_function_coverage=1 00:23:49.918 --rc genhtml_legend=1 00:23:49.918 --rc geninfo_all_blocks=1 00:23:49.918 --rc geninfo_unexecuted_blocks=1 00:23:49.918 00:23:49.918 ' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.918 --rc genhtml_branch_coverage=1 00:23:49.918 --rc genhtml_function_coverage=1 00:23:49.918 --rc genhtml_legend=1 00:23:49.918 --rc geninfo_all_blocks=1 00:23:49.918 --rc geninfo_unexecuted_blocks=1 00:23:49.918 00:23:49.918 ' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.918 --rc genhtml_branch_coverage=1 00:23:49.918 --rc genhtml_function_coverage=1 00:23:49.918 --rc genhtml_legend=1 00:23:49.918 --rc geninfo_all_blocks=1 00:23:49.918 --rc geninfo_unexecuted_blocks=1 00:23:49.918 00:23:49.918 ' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.918 --rc genhtml_branch_coverage=1 00:23:49.918 --rc genhtml_function_coverage=1 00:23:49.918 --rc genhtml_legend=1 00:23:49.918 --rc geninfo_all_blocks=1 00:23:49.918 --rc geninfo_unexecuted_blocks=1 00:23:49.918 00:23:49.918 ' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.918 
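The cmp_versions trace above is the harness concluding that the installed lcov (1.15) is older than 2, so the legacy --rc lcov_* option spelling is exported. A simpler equivalent check, shown only as an illustration (sort -V based, not the scripts/common.sh implementation):

  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"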
06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.918 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.919 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:49.919 06:18:54 
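The "line 33: [: : integer expression expected" message above is benign: an unset flag reaches a numeric [ ... -eq 1 ] test as an empty string, the test evaluates false, and the script carries on. The failure mode and the usual guard, as a standalone illustration (flag is a placeholder name, not the variable tested in nvmf/common.sh):

  flag=""
  [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected", then falls through
  [ "${flag:-0}" -eq 1 ] && echo enabled   # guarded form: empty defaults to 0, no error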
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:49.919 Cannot find device "nvmf_init_br" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:49.919 Cannot find device "nvmf_init_br2" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:49.919 Cannot find device "nvmf_tgt_br" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:49.919 Cannot find device "nvmf_tgt_br2" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:49.919 Cannot find device "nvmf_init_br" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:49.919 Cannot find device "nvmf_init_br2" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:49.919 Cannot find device "nvmf_tgt_br" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:49.919 Cannot find device "nvmf_tgt_br2" 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:49.919 06:18:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:49.919 Cannot find device "nvmf_br" 00:23:49.919 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:49.919 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:50.177 Cannot find device "nvmf_init_if" 00:23:50.177 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:50.177 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:50.177 Cannot find device "nvmf_init_if2" 00:23:50.177 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:50.177 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:50.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.177 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:50.177 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:50.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:50.178 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
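The nvmf_veth_init sequence above builds the whole test network: a namespace nvmf_tgt_ns_spdk, veth pairs for the initiator side (10.0.0.1/10.0.0.2) and the target side (10.0.0.3/10.0.0.4, moved into the namespace), everything joined over the nvmf_br bridge, plus iptables ACCEPT rules for port 4420. Condensed to a single path per side (same names and addresses as the log; the second *_if2 pair is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT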
00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:50.437 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:50.437 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:23:50.437 00:23:50.437 --- 10.0.0.3 ping statistics --- 00:23:50.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.437 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:50.437 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:50.437 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:23:50.437 00:23:50.437 --- 10.0.0.4 ping statistics --- 00:23:50.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.437 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:50.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:50.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:50.437 00:23:50.437 --- 10.0.0.1 ping statistics --- 00:23:50.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.437 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:50.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:50.437 00:23:50.437 --- 10.0.0.2 ping statistics --- 00:23:50.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.437 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82264 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82264 00:23:50.437 06:18:55 
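After the ping checks confirm the bridge works in both directions, nvmfappstart launches the target inside the namespace on core mask 0x3 and waits for its RPC socket. Roughly (the polling loop is only a stand-in for waitforlisten, and /var/tmp/spdk.sock is the default rpc.py socket):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done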
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82264 ']' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.437 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:50.437 [2024-11-27 06:18:55.388032] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:23:50.437 [2024-11-27 06:18:55.388109] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.696 [2024-11-27 06:18:55.534664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:50.696 [2024-11-27 06:18:55.592170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.696 [2024-11-27 06:18:55.592439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.696 [2024-11-27 06:18:55.592521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.696 [2024-11-27 06:18:55.592625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.696 [2024-11-27 06:18:55.592693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
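The app_setup_trace notices above also say how to pull the trace data if a run needs debugging; both forms below come straight from that notice (the copy destination is just an example path):

  spdk_trace -s nvmf -i 0                          # live snapshot of the target's tracepoint ring
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.bak   # or keep the raw ring buffer for offline analysis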
00:23:50.696 [2024-11-27 06:18:55.593957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.696 [2024-11-27 06:18:55.593969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.696 [2024-11-27 06:18:55.652363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.696 06:18:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:50.954 [2024-11-27 06:18:55.999141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.954 06:18:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:51.520 Malloc0 00:23:51.520 06:18:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.778 06:18:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.036 06:18:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:52.294 [2024-11-27 06:18:57.148254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82307 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82307 /var/tmp/bdevperf.sock 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82307 ']' 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
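Collected in one place, the target-side provisioning that timeout.sh performs through rpc.py is the five calls below (arguments copied from the trace above; 64 and 512 are the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values set at the top of the test):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420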
00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.294 06:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:52.294 [2024-11-27 06:18:57.229159] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:23:52.294 [2024-11-27 06:18:57.229268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82307 ] 00:23:52.294 [2024-11-27 06:18:57.379259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.553 [2024-11-27 06:18:57.443819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.553 [2024-11-27 06:18:57.502664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:53.144 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.144 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:53.144 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:53.403 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:53.970 NVMe0n1 00:23:53.970 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82325 00:23:53.970 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.970 06:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:53.970 Running I/O for 10 seconds... 
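On the host side, bdevperf runs with its own RPC socket, and the controller is attached with the reconnect knobs that drive the retry behaviour seen in the rest of this test (--ctrlr-loss-timeout-sec 5, --reconnect-delay-sec 2). The same sequence, restated as plain commands against the bdevperf socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $rpc -s $sock bdev_nvme_set_options -r -1
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests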
00:23:54.909 06:18:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:55.171 6933.00 IOPS, 27.08 MiB/s [2024-11-27T06:19:00.268Z] [2024-11-27 06:19:00.132736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 
06:19:00.133005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.132992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.171 [2024-11-27 06:19:00.133015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.171 [2024-11-27 06:19:00.133051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.171 [2024-11-27 06:19:00.133060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.171 [2024-11-27 06:19:00.133092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.171 [2024-11-27 06:19:00.133102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.171 [2024-11-27 06:19:00.133111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.171 [2024-11-27 06:19:00.133120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.171 [2024-11-27 06:19:00.133131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14e50 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133140] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.171 [2024-11-27 06:19:00.133284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 00:23:55.172 [2024-11-27 06:19:00.133386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set 
00:23:55.172 [2024-11-27 06:19:00.133458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92b10 is same with the state(6) to be set
00:23:55.172 [the same nvmf_tcp_qpair_set_recv_state error for tqpair=0xb92b10 repeats for every timestamp from 06:19:00.133465 through 06:19:00.133869]
00:23:55.172 [2024-11-27 06:19:00.133916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.172 [2024-11-27 06:19:00.133933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.172 [2024-11-27 06:19:00.133952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.172 [2024-11-27 06:19:00.133961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.172 [2024-11-27 06:19:00.133971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.172 [2024-11-27 06:19:00.133980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.172 [2024-11-27 06:19:00.133990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.172 [2024-11-27 06:19:00.133998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.172 [2024-11-27 06:19:00.134009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.172 [2024-11-27 06:19:00.134017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.172 [2024-11-27 06:19:00.134027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.172 [2024-11-27 06:19:00.134035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.172 [2024-11-27 06:19:00.134045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.172 [2024-11-27 06:19:00.134053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.172 [2024-11-27 06:19:00.134063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.172 [2024-11-27 06:19:00.134073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.172 [2024-11-27 06:19:00.134083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.172 [2024-11-27 06:19:00.134091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.172 [2024-11-27 06:19:00.134102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 
[2024-11-27 06:19:00.134477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.173 [2024-11-27 06:19:00.134926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.173 [2024-11-27 06:19:00.134935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.134943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.134961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.134971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.134979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.134989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.134998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65240 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.174 [2024-11-27 06:19:00.135273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135461] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.174 [2024-11-27 06:19:00.135691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.174 [2024-11-27 06:19:00.135699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.135983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.135995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.175 [2024-11-27 06:19:00.136186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 
[2024-11-27 06:19:00.136308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.175 [2024-11-27 06:19:00.136418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.175 [2024-11-27 06:19:00.136429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.176 [2024-11-27 06:19:00.136438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.176 [2024-11-27 06:19:00.136463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.176 [2024-11-27 06:19:00.136488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.176 [2024-11-27 06:19:00.136512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.176 [2024-11-27 06:19:00.136532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.176 [2024-11-27 06:19:00.136567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74970 is same with the state(6) to be set 00:23:55.176 [2024-11-27 06:19:00.136589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:55.176 [2024-11-27 06:19:00.136597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:55.176 [2024-11-27 06:19:00.136605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:23:55.176 [2024-11-27 06:19:00.136613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.176 [2024-11-27 06:19:00.136912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:55.176 [2024-11-27 06:19:00.136936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c14e50 (9): Bad file descriptor 00:23:55.176 [2024-11-27 06:19:00.137061] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.176 [2024-11-27 06:19:00.137082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c14e50 with addr=10.0.0.3, port=4420 00:23:55.176 [2024-11-27 06:19:00.137094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14e50 is same with the state(6) to be set 00:23:55.176 [2024-11-27 06:19:00.137110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c14e50 (9): Bad file descriptor 00:23:55.176 [2024-11-27 06:19:00.137126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:55.176 [2024-11-27 06:19:00.137134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:55.176 [2024-11-27 06:19:00.137160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:55.176 [2024-11-27 06:19:00.137171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:55.176 [2024-11-27 06:19:00.137180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:55.176 06:19:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:57.049 4050.00 IOPS, 15.82 MiB/s [2024-11-27T06:19:02.146Z] 2700.00 IOPS, 10.55 MiB/s [2024-11-27T06:19:02.146Z] [2024-11-27 06:19:02.137537] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.049 [2024-11-27 06:19:02.137613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c14e50 with addr=10.0.0.3, port=4420 00:23:57.049 [2024-11-27 06:19:02.137630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14e50 is same with the state(6) to be set 00:23:57.049 [2024-11-27 06:19:02.137659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c14e50 (9): Bad file descriptor 00:23:57.049 [2024-11-27 06:19:02.137681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:57.049 [2024-11-27 06:19:02.137691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:57.049 [2024-11-27 06:19:02.137704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:57.049 [2024-11-27 06:19:02.137717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:57.049 [2024-11-27 06:19:02.137729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:57.308 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:57.308 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.308 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:57.567 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:57.567 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:57.567 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:57.567 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:57.825 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:57.825 06:19:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:59.018 2025.00 IOPS, 7.91 MiB/s [2024-11-27T06:19:04.373Z] 1620.00 IOPS, 6.33 MiB/s [2024-11-27T06:19:04.374Z] [2024-11-27 06:19:04.137965] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.277 [2024-11-27 06:19:04.138046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c14e50 with addr=10.0.0.3, port=4420 00:23:59.277 [2024-11-27 06:19:04.138063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14e50 is same with the state(6) to be set 00:23:59.277 [2024-11-27 06:19:04.138089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c14e50 (9): Bad file descriptor 00:23:59.277 [2024-11-27 06:19:04.138111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:59.277 [2024-11-27 06:19:04.138120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:59.277 [2024-11-27 06:19:04.138144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:59.277 [2024-11-27 06:19:04.138211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:59.277 [2024-11-27 06:19:04.138224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:01.152 1350.00 IOPS, 5.27 MiB/s [2024-11-27T06:19:06.249Z] 1157.14 IOPS, 4.52 MiB/s [2024-11-27T06:19:06.249Z] [2024-11-27 06:19:06.138376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:01.152 [2024-11-27 06:19:06.138430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:01.152 [2024-11-27 06:19:06.138442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:01.152 [2024-11-27 06:19:06.138454] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:24:01.152 [2024-11-27 06:19:06.138466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:02.090 1012.50 IOPS, 3.96 MiB/s 00:24:02.090 Latency(us) 00:24:02.090 [2024-11-27T06:19:07.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.090 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:02.090 Verification LBA range: start 0x0 length 0x4000 00:24:02.090 NVMe0n1 : 8.18 990.19 3.87 15.65 0.00 127135.45 4081.11 7046430.72 00:24:02.090 [2024-11-27T06:19:07.187Z] =================================================================================================================== 00:24:02.090 [2024-11-27T06:19:07.187Z] Total : 990.19 3.87 15.65 0.00 127135.45 4081.11 7046430.72 00:24:02.090 { 00:24:02.090 "results": [ 00:24:02.090 { 00:24:02.090 "job": "NVMe0n1", 00:24:02.090 "core_mask": "0x4", 00:24:02.090 "workload": "verify", 00:24:02.090 "status": "finished", 00:24:02.090 "verify_range": { 00:24:02.090 "start": 0, 00:24:02.090 "length": 16384 00:24:02.090 }, 00:24:02.090 "queue_depth": 128, 00:24:02.090 "io_size": 4096, 00:24:02.090 "runtime": 8.180207, 00:24:02.090 "iops": 990.1949913981395, 00:24:02.090 "mibps": 3.8679491851489822, 00:24:02.090 "io_failed": 128, 00:24:02.090 "io_timeout": 0, 00:24:02.090 "avg_latency_us": 127135.45020020331, 00:24:02.090 "min_latency_us": 4081.1054545454544, 00:24:02.090 "max_latency_us": 7046430.72 00:24:02.090 } 00:24:02.090 ], 00:24:02.090 "core_count": 1 00:24:02.090 } 00:24:02.657 06:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:24:02.657 06:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.657 06:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:02.915 06:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:02.915 06:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:24:02.915 06:19:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:02.915 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82325 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82307 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82307 ']' 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82307 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82307 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:03.482 killing process with pid 82307 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82307' 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82307 00:24:03.482 Received shutdown signal, test time was about 9.362835 seconds 00:24:03.482 00:24:03.482 Latency(us) 00:24:03.482 [2024-11-27T06:19:08.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.482 [2024-11-27T06:19:08.579Z] =================================================================================================================== 00:24:03.482 [2024-11-27T06:19:08.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.482 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82307 00:24:03.741 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:04.000 [2024-11-27 06:19:08.848301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82452 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82452 /var/tmp/bdevperf.sock 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82452 ']' 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.001 06:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.001 [2024-11-27 06:19:08.922328] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:24:04.001 [2024-11-27 06:19:08.922422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82452 ] 00:24:04.001 [2024-11-27 06:19:09.068567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.260 [2024-11-27 06:19:09.122664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.260 [2024-11-27 06:19:09.194883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:04.826 06:19:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.085 06:19:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:05.085 06:19:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:05.085 06:19:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:05.652 NVMe0n1 00:24:05.652 06:19:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82477 00:24:05.652 06:19:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.652 06:19:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:24:05.652 Running I/O for 10 seconds... 
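The trace above (host/timeout.sh@71 through @86) re-adds the listener on 10.0.0.3:4420, starts a fresh bdevperf instance on its own RPC socket, attaches the remote controller with the reconnect parameters under test, and launches a 10-second verify run. A rough reconstruction of that sequence from the commands in the trace (same binaries, socket path, address and NQN; an approximation, not the test script itself):

  # start bdevperf on core 2 (-m 0x4) in RPC-wait mode (-z): queue depth 128, 4 KiB verify I/O, 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  # apply the bdev_nvme options used by the test (-r -1, exactly as traced above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # attach the target controller with the timeout/reconnect knobs being exercised
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # kick off the timed workload against the resulting NVMe0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &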
00:24:06.739 06:19:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:06.739 9508.00 IOPS, 37.14 MiB/s [2024-11-27T06:19:11.836Z] [2024-11-27 06:19:11.805465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.805902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.805920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.805943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.805962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:06.739 [2024-11-27 06:19:11.805982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.805992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.806000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.806017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.806035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.739 [2024-11-27 06:19:11.806053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806207] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.739 [2024-11-27 06:19:11.806226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.739 [2024-11-27 06:19:11.806237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806396] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 
[2024-11-27 06:19:11.806790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.740 [2024-11-27 06:19:11.806947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.806983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-11-27 06:19:11.806992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-11-27 06:19:11.807000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807190] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92656 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-11-27 06:19:11.807521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:06.741 [2024-11-27 06:19:11.807577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-11-27 06:19:11.807650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ba970 is same with the state(6) to be set 00:24:06.741 [2024-11-27 06:19:11.807671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.741 [2024-11-27 06:19:11.807678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.741 [2024-11-27 06:19:11.807686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92120 len:8 PRP1 0x0 PRP2 0x0 00:24:06.741 [2024-11-27 06:19:11.807694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.741 [2024-11-27 06:19:11.807712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.741 [2024-11-27 06:19:11.807720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92720 len:8 PRP1 0x0 PRP2 0x0 00:24:06.741 [2024-11-27 06:19:11.807729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-11-27 06:19:11.807739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.741 [2024-11-27 06:19:11.807762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.741 [2024-11-27 06:19:11.807770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:24:06.741 [2024-11-27 06:19:11.807779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807788] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.807795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92736 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.807811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.807825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92744 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.807841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.807855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92752 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.807871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.807885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92760 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.807901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.807915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92768 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.807931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.807946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92776 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.807962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.807971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:06.742 [2024-11-27 06:19:11.807978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.807988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92784 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92792 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92800 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92808 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92816 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92824 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808200] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92128 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92136 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92144 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92152 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92160 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92168 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92176 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.742 [2024-11-27 06:19:11.808495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.742 [2024-11-27 06:19:11.808502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92184 len:8 PRP1 0x0 PRP2 0x0 00:24:06.742 [2024-11-27 06:19:11.808511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.742 [2024-11-27 06:19:11.808898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:06.742 [2024-11-27 06:19:11.808999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:06.742 [2024-11-27 06:19:11.809164] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.742 [2024-11-27 06:19:11.809199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ae50 with addr=10.0.0.3, port=4420 00:24:06.742 [2024-11-27 06:19:11.809211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75ae50 is same with the state(6) to be set 00:24:06.742 [2024-11-27 06:19:11.809230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:06.742 [2024-11-27 06:19:11.809246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:06.742 [2024-11-27 06:19:11.809256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:06.742 [2024-11-27 06:19:11.809268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:06.742 [2024-11-27 06:19:11.809278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:06.742 [2024-11-27 06:19:11.809299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:06.742 06:19:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:24:07.939 5738.00 IOPS, 22.41 MiB/s [2024-11-27T06:19:13.036Z] [2024-11-27 06:19:12.809455] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-11-27 06:19:12.809541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ae50 with addr=10.0.0.3, port=4420 00:24:07.939 [2024-11-27 06:19:12.809558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75ae50 is same with the state(6) to be set 00:24:07.939 [2024-11-27 06:19:12.809585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:07.939 [2024-11-27 06:19:12.809606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:07.939 [2024-11-27 06:19:12.809617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:07.939 [2024-11-27 06:19:12.809628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:07.939 [2024-11-27 06:19:12.809639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:07.939 [2024-11-27 06:19:12.809651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:07.939 06:19:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:08.198 [2024-11-27 06:19:13.091771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:08.198 06:19:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82477 00:24:08.766 3825.33 IOPS, 14.94 MiB/s [2024-11-27T06:19:13.863Z] [2024-11-27 06:19:13.824693] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
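The long stretch of ABORTED - SQ DELETION completions and the connect() failed, errno = 111 retries above are the expected fallout of host/timeout.sh@87 removing the 10.0.0.3:4420 listener mid-run: the in-flight commands on qid:1 are aborted, and the host retries roughly once per second (--reconnect-delay-sec 1) until the listener returns at 06:19:13, at which point the pending reset completes ("Resetting controller successful") and throughput ramps back up. A sketch of the fault-injection sequence as traced (same NQN and address; timing per the @90 sleep, not the script itself):

  # drop the TCP listener out from under the connected host
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # let the host fail I/O and begin its 1 s reconnect attempts
  sleep 1
  # restore the listener inside the 5 s --ctrlr-loss-timeout-sec window so the reset can succeed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420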
00:24:10.639 2869.00 IOPS, 11.21 MiB/s [2024-11-27T06:19:16.674Z] 3898.60 IOPS, 15.23 MiB/s [2024-11-27T06:19:18.052Z] 4961.17 IOPS, 19.38 MiB/s [2024-11-27T06:19:18.990Z] 5699.00 IOPS, 22.26 MiB/s [2024-11-27T06:19:19.928Z] 6207.62 IOPS, 24.25 MiB/s [2024-11-27T06:19:20.866Z] 6553.44 IOPS, 25.60 MiB/s [2024-11-27T06:19:20.866Z] 6755.70 IOPS, 26.39 MiB/s 00:24:15.769 Latency(us) 00:24:15.769 [2024-11-27T06:19:20.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.769 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:15.769 Verification LBA range: start 0x0 length 0x4000 00:24:15.769 NVMe0n1 : 10.01 6761.77 26.41 0.00 0.00 18896.79 1131.99 3019898.88 00:24:15.769 [2024-11-27T06:19:20.866Z] =================================================================================================================== 00:24:15.769 [2024-11-27T06:19:20.866Z] Total : 6761.77 26.41 0.00 0.00 18896.79 1131.99 3019898.88 00:24:15.769 { 00:24:15.769 "results": [ 00:24:15.769 { 00:24:15.769 "job": "NVMe0n1", 00:24:15.769 "core_mask": "0x4", 00:24:15.769 "workload": "verify", 00:24:15.769 "status": "finished", 00:24:15.769 "verify_range": { 00:24:15.769 "start": 0, 00:24:15.769 "length": 16384 00:24:15.769 }, 00:24:15.769 "queue_depth": 128, 00:24:15.769 "io_size": 4096, 00:24:15.769 "runtime": 10.009952, 00:24:15.769 "iops": 6761.770685813479, 00:24:15.769 "mibps": 26.4131667414589, 00:24:15.769 "io_failed": 0, 00:24:15.769 "io_timeout": 0, 00:24:15.769 "avg_latency_us": 18896.787718293966, 00:24:15.769 "min_latency_us": 1131.9854545454546, 00:24:15.769 "max_latency_us": 3019898.88 00:24:15.769 } 00:24:15.769 ], 00:24:15.769 "core_count": 1 00:24:15.769 } 00:24:15.769 06:19:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82582 00:24:15.769 06:19:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.769 06:19:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:24:15.769 Running I/O for 10 seconds... 
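Each finished run reports the same data twice: the human-readable Latency(us) table and a JSON document (the { "results": [ ... ] } blocks above) carrying the job name, core mask, runtime, IOPS, MiB/s, failure counts, and average/min/max latency in microseconds. Purely as an illustration (the test itself does not post-process this output), the headline numbers could be pulled from a saved copy of one of those JSON blocks, here assumed to have been captured to results.json:

  # hypothetical post-processing of a captured results document; results.json is an assumed filename
  jq '.results[0] | {job, iops, mibps, io_failed, avg_latency_us}' results.json
  # for the 10 s run above this would report roughly 6761.77 IOPS and ~18.9 ms average latency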
00:24:16.707 06:19:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:16.969 7266.00 IOPS, 28.38 MiB/s [2024-11-27T06:19:22.066Z] [2024-11-27 06:19:21.917896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.969 [2024-11-27 06:19:21.917957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.917980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.917991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:16.969 [2024-11-27 06:19:21.918410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.969 [2024-11-27 06:19:21.918421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.969 [2024-11-27 06:19:21.918430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918944] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.918982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.918991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 06:19:21.919341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.970 [2024-11-27 06:19:21.919350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.970 [2024-11-27 
06:19:21.919359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:114 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.919914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.919932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.919951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.919969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.919988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.919998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.920008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.920018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.920027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.920037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.971 [2024-11-27 06:19:21.920046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.920057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.920065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.920075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.920084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.971 [2024-11-27 06:19:21.920094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.971 [2024-11-27 06:19:21.920103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.972 [2024-11-27 
06:19:21.920122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.972 [2024-11-27 06:19:21.920153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.972 [2024-11-27 06:19:21.920171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.972 [2024-11-27 06:19:21.920629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b8fd0 is same with the state(6) to be set 00:24:16.972 [2024-11-27 06:19:21.920650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.972 [2024-11-27 06:19:21.920658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.972 [2024-11-27 06:19:21.920665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66416 len:8 PRP1 0x0 PRP2 0x0 00:24:16.972 [2024-11-27 06:19:21.920681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.972 [2024-11-27 06:19:21.920929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:16.972 [2024-11-27 06:19:21.921012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:16.972 [2024-11-27 06:19:21.921107] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.972 [2024-11-27 06:19:21.921152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ae50 with addr=10.0.0.3, port=4420 00:24:16.972 [2024-11-27 
06:19:21.921165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75ae50 is same with the state(6) to be set 00:24:16.972 [2024-11-27 06:19:21.921182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:16.972 [2024-11-27 06:19:21.921201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:16.972 [2024-11-27 06:19:21.921211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:16.972 [2024-11-27 06:19:21.921230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:16.972 [2024-11-27 06:19:21.921257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:16.972 [2024-11-27 06:19:21.921284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:16.972 06:19:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:24:17.909 4087.50 IOPS, 15.97 MiB/s [2024-11-27T06:19:23.006Z] [2024-11-27 06:19:22.921398] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.909 [2024-11-27 06:19:22.921461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ae50 with addr=10.0.0.3, port=4420 00:24:17.909 [2024-11-27 06:19:22.921475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75ae50 is same with the state(6) to be set 00:24:17.909 [2024-11-27 06:19:22.921500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:17.909 [2024-11-27 06:19:22.921518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:17.909 [2024-11-27 06:19:22.921528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:17.909 [2024-11-27 06:19:22.921539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:17.910 [2024-11-27 06:19:22.921550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:24:17.910 [2024-11-27 06:19:22.921562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:18.851 2725.00 IOPS, 10.64 MiB/s [2024-11-27T06:19:23.948Z] [2024-11-27 06:19:23.921705] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.851 [2024-11-27 06:19:23.921773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ae50 with addr=10.0.0.3, port=4420 00:24:18.851 [2024-11-27 06:19:23.921788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75ae50 is same with the state(6) to be set 00:24:18.851 [2024-11-27 06:19:23.921813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:18.851 [2024-11-27 06:19:23.921831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:18.851 [2024-11-27 06:19:23.921842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:18.851 [2024-11-27 06:19:23.921854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:18.851 [2024-11-27 06:19:23.921865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:18.851 [2024-11-27 06:19:23.921876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:20.056 2043.75 IOPS, 7.98 MiB/s [2024-11-27T06:19:25.153Z] [2024-11-27 06:19:24.925540] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.056 [2024-11-27 06:19:24.925611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ae50 with addr=10.0.0.3, port=4420 00:24:20.056 [2024-11-27 06:19:24.925628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75ae50 is same with the state(6) to be set 00:24:20.056 [2024-11-27 06:19:24.925865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75ae50 (9): Bad file descriptor 00:24:20.056 [2024-11-27 06:19:24.926077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:20.056 [2024-11-27 06:19:24.926089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:20.056 [2024-11-27 06:19:24.926100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:20.056 [2024-11-27 06:19:24.926111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
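A side note on the failure loop above: errno 111 in the repeated uring_sock_create connect() errors is ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.3:4420 while the subsystem listener is removed, so each reset attempt fails until the listener is added back. A two-line Python check (assuming a Linux errno layout, which is what this job runs on):

    import errno, os

    assert errno.ECONNREFUSED == 111         # Linux value matching the log's "errno = 111"
    print(os.strerror(errno.ECONNREFUSED))   # -> "Connection refused"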
00:24:20.056 [2024-11-27 06:19:24.926123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:20.056 06:19:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:20.315 [2024-11-27 06:19:25.240414] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:20.315 06:19:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82582 00:24:20.884 1635.00 IOPS, 6.39 MiB/s [2024-11-27T06:19:25.981Z] [2024-11-27 06:19:25.950000] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:24:22.759 2580.83 IOPS, 10.08 MiB/s [2024-11-27T06:19:29.236Z] 3435.00 IOPS, 13.42 MiB/s [2024-11-27T06:19:29.806Z] 4077.62 IOPS, 15.93 MiB/s [2024-11-27T06:19:31.183Z] 4587.22 IOPS, 17.92 MiB/s [2024-11-27T06:19:31.183Z] 4998.10 IOPS, 19.52 MiB/s 00:24:26.086 Latency(us) 00:24:26.087 [2024-11-27T06:19:31.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.087 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.087 Verification LBA range: start 0x0 length 0x4000 00:24:26.087 NVMe0n1 : 10.01 5003.92 19.55 3985.91 0.00 14213.84 1064.96 3019898.88 00:24:26.087 [2024-11-27T06:19:31.184Z] =================================================================================================================== 00:24:26.087 [2024-11-27T06:19:31.184Z] Total : 5003.92 19.55 3985.91 0.00 14213.84 0.00 3019898.88 00:24:26.087 { 00:24:26.087 "results": [ 00:24:26.087 { 00:24:26.087 "job": "NVMe0n1", 00:24:26.087 "core_mask": "0x4", 00:24:26.087 "workload": "verify", 00:24:26.087 "status": "finished", 00:24:26.087 "verify_range": { 00:24:26.087 "start": 0, 00:24:26.087 "length": 16384 00:24:26.087 }, 00:24:26.087 "queue_depth": 128, 00:24:26.087 "io_size": 4096, 00:24:26.087 "runtime": 10.010753, 00:24:26.087 "iops": 5003.919285592203, 00:24:26.087 "mibps": 19.546559709344542, 00:24:26.087 "io_failed": 39902, 00:24:26.087 "io_timeout": 0, 00:24:26.087 "avg_latency_us": 14213.843211733983, 00:24:26.087 "min_latency_us": 1064.96, 00:24:26.087 "max_latency_us": 3019898.88 00:24:26.087 } 00:24:26.087 ], 00:24:26.087 "core_count": 1 00:24:26.087 } 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82452 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82452 ']' 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82452 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82452 00:24:26.087 killing process with pid 82452 00:24:26.087 Received shutdown signal, test time was about 10.000000 seconds 00:24:26.087 00:24:26.087 Latency(us) 00:24:26.087 [2024-11-27T06:19:31.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.087 [2024-11-27T06:19:31.184Z] =================================================================================================================== 00:24:26.087 [2024-11-27T06:19:31.184Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82452' 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82452 00:24:26.087 06:19:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82452 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82696 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82696 /var/tmp/bdevperf.sock 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82696 ']' 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.087 06:19:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:26.346 [2024-11-27 06:19:31.233284] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:24:26.346 [2024-11-27 06:19:31.233661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82696 ] 00:24:26.346 [2024-11-27 06:19:31.384402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.605 [2024-11-27 06:19:31.451850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.605 [2024-11-27 06:19:31.542399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:27.174 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.174 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:27.174 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82712 00:24:27.174 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:27.174 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82696 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:27.743 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:28.001 NVMe0n1 00:24:28.001 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82754 00:24:28.001 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.001 06:19:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:24:28.001 Running I/O for 10 seconds... 
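The setup the harness drives in the lines above reduces to the following sequence; this is a condensed sketch rather than an exact replay of host/timeout.sh, with every path, address and option value copied verbatim from the log (the vagrant workspace paths and a target already listening on 10.0.0.3:4420 are assumed):

# bdevperf on core mask 0x4, idling until perform_tests is issued over the RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

# NVMe bdev options used by the timeout test (values taken from the log above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_options -r -1 -e 9

# attach the TCP target at 10.0.0.3:4420; the controller may stay disconnected for
# at most 5 s before it is treated as lost, with reconnect attempts every 2 s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# start the queued 10-second randread run
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests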
00:24:28.937 06:19:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:29.201 14478.00 IOPS, 56.55 MiB/s [2024-11-27T06:19:34.298Z] [2024-11-27 06:19:34.179539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb96ac0 is same with the state(6) to be set
[the tcp.c:1790 nvmf_tcp_qpair_set_recv_state error above repeats identically for tqpair=0xb96ac0 from 06:19:34.179603 through 06:19:34.180855]
00:24:29.202 [2024-11-27 06:19:34.180913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.180945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.180968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.180979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.180991] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181204] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.202 [2024-11-27 06:19:34.181340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.202 [2024-11-27 06:19:34.181352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 
[2024-11-27 06:19:34.181901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.181980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.181991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.203 [2024-11-27 06:19:34.182268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.203 [2024-11-27 06:19:34.182277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.182982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.182991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:29.204 [2024-11-27 06:19:34.183135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.204 [2024-11-27 06:19:34.183247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.204 [2024-11-27 06:19:34.183264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183383] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.205 [2024-11-27 06:19:34.183919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.183929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70ce20 is same with the state(6) to be set 00:24:29.205 [2024-11-27 06:19:34.183941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.205 [2024-11-27 06:19:34.183949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.205 [2024-11-27 06:19:34.183957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18056 len:8 PRP1 0x0 PRP2 0x0 00:24:29.205 [2024-11-27 06:19:34.183971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.205 [2024-11-27 06:19:34.184394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:29.205 [2024-11-27 06:19:34.184493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69fe50 (9): Bad file descriptor 00:24:29.205 [2024-11-27 06:19:34.184628] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.206 [2024-11-27 06:19:34.184649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69fe50 with addr=10.0.0.3, port=4420 00:24:29.206 [2024-11-27 06:19:34.184660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69fe50 is same with the state(6) to be set 00:24:29.206 [2024-11-27 06:19:34.184678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69fe50 (9): Bad file descriptor 00:24:29.206 [2024-11-27 06:19:34.184711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:29.206 [2024-11-27 06:19:34.184724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:29.206 [2024-11-27 06:19:34.184736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
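errno 111 in the uring_sock_create failures above is ECONNREFUSED: nothing is accepting connections on 10.0.0.3:4420 while the host keeps trying to re-establish the queue pair. The same reachability check can be made from a plain shell with bash's /dev/tcp pseudo-device; this is only an illustrative sketch and not part of the test scripts being traced here.

    # Probe the address/port the host is retrying (10.0.0.3:4420).
    # The command succeeds only if something is listening; otherwise connect()
    # fails exactly as logged above (errno = 111, ECONNREFUSED).
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connection refused or timed out"
    fi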
00:24:29.206 [2024-11-27 06:19:34.184747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:29.206 [2024-11-27 06:19:34.184758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:29.206 06:19:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82754 00:24:31.148 7938.50 IOPS, 31.01 MiB/s [2024-11-27T06:19:36.245Z] 5292.33 IOPS, 20.67 MiB/s [2024-11-27T06:19:36.245Z] [2024-11-27 06:19:36.185021] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.148 [2024-11-27 06:19:36.185094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69fe50 with addr=10.0.0.3, port=4420 00:24:31.148 [2024-11-27 06:19:36.185113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69fe50 is same with the state(6) to be set 00:24:31.148 [2024-11-27 06:19:36.185168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69fe50 (9): Bad file descriptor 00:24:31.148 [2024-11-27 06:19:36.185193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:31.148 [2024-11-27 06:19:36.185205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:31.148 [2024-11-27 06:19:36.185217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:31.148 [2024-11-27 06:19:36.185230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:31.148 [2024-11-27 06:19:36.185294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:33.023 3969.25 IOPS, 15.50 MiB/s [2024-11-27T06:19:38.379Z] 3175.40 IOPS, 12.40 MiB/s [2024-11-27T06:19:38.379Z] [2024-11-27 06:19:38.185516] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.282 [2024-11-27 06:19:38.185577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69fe50 with addr=10.0.0.3, port=4420 00:24:33.282 [2024-11-27 06:19:38.185599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69fe50 is same with the state(6) to be set 00:24:33.282 [2024-11-27 06:19:38.185626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69fe50 (9): Bad file descriptor 00:24:33.282 [2024-11-27 06:19:38.185647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:33.282 [2024-11-27 06:19:38.185657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:33.282 [2024-11-27 06:19:38.185669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:33.282 [2024-11-27 06:19:38.185681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
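The reconnect attempts above land roughly two seconds apart (06:19:34, 06:19:36, 06:19:38) because bdev_nvme delays each retry instead of reconnecting in a tight loop. That pacing is normally configured through rpc.py before controllers are attached; the flag names below are an assumption based on the bdev_nvme_set_options interface and are not taken from this run.

    # Assumed flags, shown for illustration only (not present in this log):
    # --reconnect-delay-sec paces the retries, --ctrlr-loss-timeout-sec bounds
    # how long the bdev layer keeps retrying before giving the controller up.
    ./scripts/rpc.py bdev_nvme_set_options \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 10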
00:24:33.282 [2024-11-27 06:19:38.185693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:35.156 2646.17 IOPS, 10.34 MiB/s [2024-11-27T06:19:40.253Z] 2268.14 IOPS, 8.86 MiB/s [2024-11-27T06:19:40.253Z] [2024-11-27 06:19:40.185775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:35.156 [2024-11-27 06:19:40.185822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:35.156 [2024-11-27 06:19:40.185846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:35.156 [2024-11-27 06:19:40.185857] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:24:35.156 [2024-11-27 06:19:40.185870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:36.351 1984.62 IOPS, 7.75 MiB/s 00:24:36.351 Latency(us) 00:24:36.351 [2024-11-27T06:19:41.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.351 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:36.351 NVMe0n1 : 8.10 1959.28 7.65 15.80 0.00 64691.45 7864.32 7015926.69 00:24:36.351 [2024-11-27T06:19:41.448Z] =================================================================================================================== 00:24:36.351 [2024-11-27T06:19:41.448Z] Total : 1959.28 7.65 15.80 0.00 64691.45 7864.32 7015926.69 00:24:36.351 { 00:24:36.351 "results": [ 00:24:36.351 { 00:24:36.351 "job": "NVMe0n1", 00:24:36.351 "core_mask": "0x4", 00:24:36.351 "workload": "randread", 00:24:36.351 "status": "finished", 00:24:36.351 "queue_depth": 128, 00:24:36.351 "io_size": 4096, 00:24:36.351 "runtime": 8.103468, 00:24:36.351 "iops": 1959.2845927200551, 00:24:36.351 "mibps": 7.653455440312715, 00:24:36.351 "io_failed": 128, 00:24:36.351 "io_timeout": 0, 00:24:36.351 "avg_latency_us": 64691.45088886996, 00:24:36.351 "min_latency_us": 7864.32, 00:24:36.351 "max_latency_us": 7015926.69090909 00:24:36.351 } 00:24:36.351 ], 00:24:36.351 "core_count": 1 00:24:36.351 } 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.351 Attaching 5 probes... 
00:24:36.351 1396.791952: reset bdev controller NVMe0 00:24:36.351 1396.981255: reconnect bdev controller NVMe0 00:24:36.351 3397.241699: reconnect delay bdev controller NVMe0 00:24:36.351 3397.283122: reconnect bdev controller NVMe0 00:24:36.351 5397.780473: reconnect delay bdev controller NVMe0 00:24:36.351 5397.820391: reconnect bdev controller NVMe0 00:24:36.351 7398.165976: reconnect delay bdev controller NVMe0 00:24:36.351 7398.204009: reconnect bdev controller NVMe0 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82712 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82696 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82696 ']' 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82696 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82696 00:24:36.351 killing process with pid 82696 00:24:36.351 Received shutdown signal, test time was about 8.177168 seconds 00:24:36.351 00:24:36.351 Latency(us) 00:24:36.351 [2024-11-27T06:19:41.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.351 [2024-11-27T06:19:41.448Z] =================================================================================================================== 00:24:36.351 [2024-11-27T06:19:41.448Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82696' 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82696 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82696 00:24:36.351 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.919 06:19:41 
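The pass/fail decision just made is a simple line count over the trace dumped above (the "Attaching 5 probes..." output): three 'reconnect delay' events were recorded, and the (( 3 <= 2 )) check shows the run fails only if two or fewer are seen. Condensed into a standalone sketch using the same trace file path:

    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    # Require more than two delayed reconnects to have been traced while the
    # target was unreachable; otherwise fail the test.
    count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( count <= 2 )); then
        echo "expected more than 2 delayed reconnects, got $count" >&2
        exit 1
    fi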
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.919 rmmod nvme_tcp 00:24:36.919 rmmod nvme_fabrics 00:24:36.919 rmmod nvme_keyring 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82264 ']' 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82264 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82264 ']' 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82264 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82264 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82264' 00:24:36.919 killing process with pid 82264 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82264 00:24:36.919 06:19:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82264 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.177 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:37.178 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:37.436 06:19:42 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:37.436 ************************************ 00:24:37.436 END TEST nvmf_timeout 00:24:37.436 ************************************ 00:24:37.436 00:24:37.436 real 0m47.759s 00:24:37.436 user 2m19.713s 00:24:37.436 sys 0m6.094s 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:37.436 ************************************ 00:24:37.436 END TEST nvmf_host 00:24:37.436 ************************************ 00:24:37.436 00:24:37.436 real 5m7.888s 00:24:37.436 user 13m18.772s 00:24:37.436 sys 1m14.114s 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.436 06:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.436 06:19:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:37.436 06:19:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:37.437 ************************************ 00:24:37.437 END TEST nvmf_tcp 00:24:37.437 ************************************ 00:24:37.437 00:24:37.437 real 12m49.009s 00:24:37.437 user 30m42.859s 00:24:37.437 sys 3m17.376s 00:24:37.437 06:19:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.437 06:19:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.696 06:19:42 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:24:37.696 06:19:42 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:37.696 06:19:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:37.696 06:19:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.696 06:19:42 -- common/autotest_common.sh@10 -- # set +x 00:24:37.696 ************************************ 00:24:37.696 START TEST nvmf_dif 00:24:37.696 ************************************ 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:37.696 * Looking for test storage... 
00:24:37.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.696 --rc genhtml_branch_coverage=1 00:24:37.696 --rc genhtml_function_coverage=1 00:24:37.696 --rc genhtml_legend=1 00:24:37.696 --rc geninfo_all_blocks=1 00:24:37.696 --rc geninfo_unexecuted_blocks=1 00:24:37.696 00:24:37.696 ' 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.696 --rc genhtml_branch_coverage=1 00:24:37.696 --rc genhtml_function_coverage=1 00:24:37.696 --rc genhtml_legend=1 00:24:37.696 --rc geninfo_all_blocks=1 00:24:37.696 --rc geninfo_unexecuted_blocks=1 00:24:37.696 00:24:37.696 ' 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:24:37.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.696 --rc genhtml_branch_coverage=1 00:24:37.696 --rc genhtml_function_coverage=1 00:24:37.696 --rc genhtml_legend=1 00:24:37.696 --rc geninfo_all_blocks=1 00:24:37.696 --rc geninfo_unexecuted_blocks=1 00:24:37.696 00:24:37.696 ' 00:24:37.696 06:19:42 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.696 --rc genhtml_branch_coverage=1 00:24:37.696 --rc genhtml_function_coverage=1 00:24:37.696 --rc genhtml_legend=1 00:24:37.696 --rc geninfo_all_blocks=1 00:24:37.696 --rc geninfo_unexecuted_blocks=1 00:24:37.696 00:24:37.696 ' 00:24:37.696 06:19:42 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.696 06:19:42 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.696 06:19:42 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.957 06:19:42 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.957 06:19:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.957 06:19:42 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.957 06:19:42 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.957 06:19:42 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:37.957 06:19:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.957 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.957 06:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:37.957 06:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:37.957 06:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:37.957 06:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:37.957 06:19:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.957 06:19:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:37.957 06:19:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:37.957 06:19:42 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:37.957 Cannot find device "nvmf_init_br" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:37.957 Cannot find device "nvmf_init_br2" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:37.957 Cannot find device "nvmf_tgt_br" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:37.957 Cannot find device "nvmf_tgt_br2" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:37.957 Cannot find device "nvmf_init_br" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:37.957 Cannot find device "nvmf_init_br2" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:37.957 Cannot find device "nvmf_tgt_br" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:37.957 Cannot find device "nvmf_tgt_br2" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:37.957 Cannot find device "nvmf_br" 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:37.957 06:19:42 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:24:37.957 Cannot find device "nvmf_init_if" 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:37.958 Cannot find device "nvmf_init_if2" 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:37.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:37.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:37.958 06:19:42 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:37.958 06:19:43 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:38.217 06:19:43 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:38.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:38.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:24:38.217 00:24:38.217 --- 10.0.0.3 ping statistics --- 00:24:38.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.217 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:38.217 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:38.217 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:24:38.217 00:24:38.217 --- 10.0.0.4 ping statistics --- 00:24:38.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.217 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:38.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:38.217 00:24:38.217 --- 10.0.0.1 ping statistics --- 00:24:38.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.217 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:38.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:38.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:24:38.217 00:24:38.217 --- 10.0.0.2 ping statistics --- 00:24:38.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.217 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:38.217 06:19:43 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:38.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:38.476 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.476 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.735 06:19:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:38.735 06:19:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83251 00:24:38.735 06:19:43 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83251 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83251 ']' 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.735 06:19:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:38.735 [2024-11-27 06:19:43.665589] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:24:38.735 [2024-11-27 06:19:43.665859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.735 [2024-11-27 06:19:43.817579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.994 [2024-11-27 06:19:43.881702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
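The target starting here runs inside the nvmf_tgt_ns_spdk namespace built above: veth pairs carry 10.0.0.1/10.0.0.2 on the host (initiator) side and 10.0.0.3/10.0.0.4 inside the namespace, their peer ends are joined by the nvmf_br bridge, and iptables explicitly accepts TCP port 4420 on the initiator interfaces. Condensed to a single initiator/target pair, the setup amounts to:

    # One veth pair stays on the host (initiator), the other moves into the
    # target namespace; the bridge nvmf_br connects their peer ends.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow NVMe/TCP traffic in, then verify reachability as the log does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3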
00:24:38.994 [2024-11-27 06:19:43.881783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.994 [2024-11-27 06:19:43.881798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.994 [2024-11-27 06:19:43.881809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.994 [2024-11-27 06:19:43.881818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.994 [2024-11-27 06:19:43.882319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.994 [2024-11-27 06:19:43.959203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:24:38.994 06:19:44 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:38.994 06:19:44 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.994 06:19:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:38.994 06:19:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:38.994 [2024-11-27 06:19:44.083432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.994 06:19:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.994 06:19:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:39.253 ************************************ 00:24:39.253 START TEST fio_dif_1_default 00:24:39.253 ************************************ 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.253 bdev_null0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:39.253 
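Stripped of the xtrace noise, the bring-up that produced the '*** TCP Transport Init ***' notice above is two steps: start nvmf_tgt inside the namespace, then create a TCP transport with --dif-insert-or-strip, the option this dif test exercises. A minimal sketch follows; the polling loop stands in for the suite's waitforlisten helper, and the spdk_get_version RPC is used here only as a readiness probe.

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    # Wait until the RPC socket answers before configuring the target.
    until "$spdk/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o --dif-insert-or-strip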
06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.253 [2024-11-27 06:19:44.127578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.253 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.254 { 00:24:39.254 "params": { 00:24:39.254 "name": "Nvme$subsystem", 00:24:39.254 "trtype": "$TEST_TRANSPORT", 00:24:39.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.254 "adrfam": "ipv4", 00:24:39.254 "trsvcid": "$NVMF_PORT", 00:24:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.254 "hdgst": ${hdgst:-false}, 00:24:39.254 "ddgst": ${ddgst:-false} 00:24:39.254 }, 00:24:39.254 "method": "bdev_nvme_attach_controller" 00:24:39.254 } 00:24:39.254 EOF 00:24:39.254 )") 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- 
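Gathered into one place, the rpc_cmd calls above (rpc_cmd is the test suite's thin wrapper around rpc.py) provision everything this fio run needs: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exposed as a namespace of nqn.2016-06.io.spdk:cnode0 and listening on 10.0.0.3:4420.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection info type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Subsystem, namespace, and TCP listener on the target address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420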
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:39.254 "params": { 00:24:39.254 "name": "Nvme0", 00:24:39.254 "trtype": "tcp", 00:24:39.254 "traddr": "10.0.0.3", 00:24:39.254 "adrfam": "ipv4", 00:24:39.254 "trsvcid": "4420", 00:24:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:39.254 "hdgst": false, 00:24:39.254 "ddgst": false 00:24:39.254 }, 00:24:39.254 "method": "bdev_nvme_attach_controller" 00:24:39.254 }' 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:39.254 06:19:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.513 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:39.513 fio-3.35 00:24:39.513 Starting 1 thread 00:24:51.779 00:24:51.779 filename0: (groupid=0, jobs=1): err= 0: pid=83310: Wed Nov 27 06:19:54 2024 00:24:51.779 read: IOPS=9883, BW=38.6MiB/s (40.5MB/s)(386MiB/10001msec) 00:24:51.779 slat (nsec): min=5909, max=84845, avg=7390.70, stdev=3401.50 00:24:51.779 clat (usec): min=322, max=2747, avg=382.91, stdev=46.83 00:24:51.779 lat (usec): min=328, max=2774, avg=390.31, stdev=47.45 00:24:51.779 clat percentiles (usec): 00:24:51.779 | 1.00th=[ 330], 5.00th=[ 
334], 10.00th=[ 343], 20.00th=[ 351], 00:24:51.779 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 383], 00:24:51.779 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 441], 95.00th=[ 465], 00:24:51.780 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 594], 00:24:51.780 | 99.99th=[ 1958] 00:24:51.780 bw ( KiB/s): min=35008, max=42016, per=100.00%, avg=39778.68, stdev=1890.05, samples=19 00:24:51.780 iops : min= 8752, max=10504, avg=9944.63, stdev=472.55, samples=19 00:24:51.780 lat (usec) : 500=98.32%, 750=1.67% 00:24:51.780 lat (msec) : 2=0.01%, 4=0.01% 00:24:51.780 cpu : usr=83.75%, sys=14.08%, ctx=15, majf=0, minf=9 00:24:51.780 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:51.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.780 issued rwts: total=98844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.780 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:51.780 00:24:51.780 Run status group 0 (all jobs): 00:24:51.780 READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=386MiB (405MB), run=10001-10001msec 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 00:24:51.780 real 0m11.040s 00:24:51.780 user 0m9.045s 00:24:51.780 sys 0m1.691s 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.780 ************************************ 00:24:51.780 END TEST fio_dif_1_default 00:24:51.780 ************************************ 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:51.780 06:19:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:51.780 06:19:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 ************************************ 00:24:51.780 START TEST fio_dif_1_multi_subsystems 00:24:51.780 ************************************ 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- 
# fio_dif_1_multi_subsystems 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 bdev_null0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 [2024-11-27 06:19:55.229323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 bdev_null1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:51.780 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.781 { 00:24:51.781 "params": { 00:24:51.781 "name": "Nvme$subsystem", 00:24:51.781 "trtype": "$TEST_TRANSPORT", 00:24:51.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.781 "adrfam": "ipv4", 00:24:51.781 "trsvcid": "$NVMF_PORT", 00:24:51.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.781 "hdgst": ${hdgst:-false}, 00:24:51.781 "ddgst": ${ddgst:-false} 00:24:51.781 }, 00:24:51.781 "method": "bdev_nvme_attach_controller" 00:24:51.781 } 
00:24:51.781 EOF 00:24:51.781 )") 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.781 { 00:24:51.781 "params": { 00:24:51.781 "name": "Nvme$subsystem", 00:24:51.781 "trtype": "$TEST_TRANSPORT", 00:24:51.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.781 "adrfam": "ipv4", 00:24:51.781 "trsvcid": "$NVMF_PORT", 00:24:51.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.781 "hdgst": ${hdgst:-false}, 00:24:51.781 "ddgst": ${ddgst:-false} 00:24:51.781 }, 00:24:51.781 "method": "bdev_nvme_attach_controller" 00:24:51.781 } 00:24:51.781 EOF 00:24:51.781 )") 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:51.781 "params": { 00:24:51.781 "name": "Nvme0", 00:24:51.781 "trtype": "tcp", 00:24:51.781 "traddr": "10.0.0.3", 00:24:51.781 "adrfam": "ipv4", 00:24:51.781 "trsvcid": "4420", 00:24:51.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:51.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:51.781 "hdgst": false, 00:24:51.781 "ddgst": false 00:24:51.781 }, 00:24:51.781 "method": "bdev_nvme_attach_controller" 00:24:51.781 },{ 00:24:51.781 "params": { 00:24:51.781 "name": "Nvme1", 00:24:51.781 "trtype": "tcp", 00:24:51.781 "traddr": "10.0.0.3", 00:24:51.781 "adrfam": "ipv4", 00:24:51.781 "trsvcid": "4420", 00:24:51.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.781 "hdgst": false, 00:24:51.781 "ddgst": false 00:24:51.781 }, 00:24:51.781 "method": "bdev_nvme_attach_controller" 00:24:51.781 }' 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:51.781 06:19:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:51.781 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:51.781 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:51.781 fio-3.35 00:24:51.781 Starting 2 threads 00:25:01.755 00:25:01.755 filename0: (groupid=0, jobs=1): err= 0: pid=83470: Wed Nov 27 06:20:06 2024 00:25:01.755 read: IOPS=4677, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:25:01.755 slat (usec): min=5, max=115, avg=19.77, stdev= 9.65 00:25:01.755 clat (usec): min=435, max=2540, avg=801.31, stdev=88.45 00:25:01.755 lat (usec): min=442, max=2575, avg=821.08, stdev=91.13 00:25:01.755 clat percentiles (usec): 00:25:01.755 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 717], 00:25:01.755 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 824], 00:25:01.755 | 70.00th=[ 848], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 947], 00:25:01.755 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1123], 99.95th=[ 1156], 00:25:01.755 | 99.99th=[ 1254] 00:25:01.755 bw ( KiB/s): min=16768, max=20151, per=50.28%, avg=18815.95, stdev=1275.30, samples=19 00:25:01.755 iops : min= 4192, max= 5037, 
avg=4703.95, stdev=318.78, samples=19 00:25:01.755 lat (usec) : 500=0.02%, 750=32.60%, 1000=65.88% 00:25:01.755 lat (msec) : 2=1.50%, 4=0.01% 00:25:01.755 cpu : usr=93.37%, sys=5.18%, ctx=13, majf=0, minf=0 00:25:01.755 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.755 issued rwts: total=46784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.755 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:01.755 filename1: (groupid=0, jobs=1): err= 0: pid=83471: Wed Nov 27 06:20:06 2024 00:25:01.755 read: IOPS=4677, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:25:01.755 slat (usec): min=5, max=204, avg=19.91, stdev= 9.85 00:25:01.755 clat (usec): min=555, max=3521, avg=801.17, stdev=97.62 00:25:01.755 lat (usec): min=563, max=3547, avg=821.07, stdev=100.15 00:25:01.755 clat percentiles (usec): 00:25:01.755 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 717], 00:25:01.755 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 824], 00:25:01.755 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 930], 95.00th=[ 955], 00:25:01.755 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1156], 00:25:01.755 | 99.99th=[ 1303] 00:25:01.755 bw ( KiB/s): min=16768, max=20151, per=50.27%, avg=18813.84, stdev=1273.03, samples=19 00:25:01.755 iops : min= 4192, max= 5037, avg=4703.42, stdev=318.21, samples=19 00:25:01.755 lat (usec) : 750=32.75%, 1000=65.38% 00:25:01.755 lat (msec) : 2=1.86%, 4=0.01% 00:25:01.755 cpu : usr=91.84%, sys=6.43%, ctx=27, majf=0, minf=0 00:25:01.755 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.755 issued rwts: total=46776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.755 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:01.755 00:25:01.755 Run status group 0 (all jobs): 00:25:01.755 READ: bw=36.5MiB/s (38.3MB/s), 18.3MiB/s-18.3MiB/s (19.2MB/s-19.2MB/s), io=365MiB (383MB), run=10001-10001msec 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.755 00:25:01.755 real 0m11.154s 00:25:01.755 user 0m19.291s 00:25:01.755 sys 0m1.434s 00:25:01.755 ************************************ 00:25:01.755 END TEST fio_dif_1_multi_subsystems 00:25:01.755 ************************************ 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.755 06:20:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:01.755 06:20:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:01.755 06:20:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:01.755 06:20:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.755 06:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:01.755 ************************************ 00:25:01.755 START TEST fio_dif_rand_params 00:25:01.755 ************************************ 00:25:01.755 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:25:01.755 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:01.755 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:01.755 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:01.756 06:20:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:01.756 bdev_null0 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:01.756 [2024-11-27 06:20:06.448444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.756 { 00:25:01.756 "params": { 00:25:01.756 "name": "Nvme$subsystem", 00:25:01.756 
"trtype": "$TEST_TRANSPORT", 00:25:01.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.756 "adrfam": "ipv4", 00:25:01.756 "trsvcid": "$NVMF_PORT", 00:25:01.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.756 "hdgst": ${hdgst:-false}, 00:25:01.756 "ddgst": ${ddgst:-false} 00:25:01.756 }, 00:25:01.756 "method": "bdev_nvme_attach_controller" 00:25:01.756 } 00:25:01.756 EOF 00:25:01.756 )") 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:01.756 "params": { 00:25:01.756 "name": "Nvme0", 00:25:01.756 "trtype": "tcp", 00:25:01.756 "traddr": "10.0.0.3", 00:25:01.756 "adrfam": "ipv4", 00:25:01.756 "trsvcid": "4420", 00:25:01.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:01.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:01.756 "hdgst": false, 00:25:01.756 "ddgst": false 00:25:01.756 }, 00:25:01.756 "method": "bdev_nvme_attach_controller" 00:25:01.756 }' 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:01.756 06:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:01.756 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:01.756 ... 
00:25:01.756 fio-3.35 00:25:01.756 Starting 3 threads 00:25:08.343 00:25:08.343 filename0: (groupid=0, jobs=1): err= 0: pid=83627: Wed Nov 27 06:20:12 2024 00:25:08.343 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(141MiB/5007msec) 00:25:08.343 slat (usec): min=5, max=107, avg=23.83, stdev=17.44 00:25:08.343 clat (usec): min=9616, max=16347, avg=13295.57, stdev=1473.88 00:25:08.343 lat (usec): min=9639, max=16360, avg=13319.40, stdev=1472.32 00:25:08.343 clat percentiles (usec): 00:25:08.343 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10814], 20.00th=[12518], 00:25:08.343 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13698], 00:25:08.343 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15139], 95.00th=[15401], 00:25:08.343 | 99.00th=[16057], 99.50th=[16319], 99.90th=[16319], 99.95th=[16319], 00:25:08.343 | 99.99th=[16319] 00:25:08.343 bw ( KiB/s): min=25344, max=35328, per=33.32%, avg=28744.33, stdev=2822.77, samples=9 00:25:08.343 iops : min= 198, max= 276, avg=224.56, stdev=22.05, samples=9 00:25:08.343 lat (msec) : 10=4.00%, 20=96.00% 00:25:08.343 cpu : usr=93.83%, sys=5.43%, ctx=17, majf=0, minf=0 00:25:08.343 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.343 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.343 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:08.343 filename0: (groupid=0, jobs=1): err= 0: pid=83628: Wed Nov 27 06:20:12 2024 00:25:08.343 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(141MiB/5006msec) 00:25:08.343 slat (usec): min=4, max=107, avg=23.46, stdev=17.21 00:25:08.343 clat (usec): min=9638, max=18017, avg=13294.24, stdev=1492.53 00:25:08.343 lat (usec): min=9661, max=18030, avg=13317.70, stdev=1490.98 00:25:08.343 clat percentiles (usec): 00:25:08.343 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10683], 20.00th=[12518], 00:25:08.343 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:25:08.343 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:25:08.343 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17957], 99.95th=[17957], 00:25:08.343 | 99.99th=[17957] 00:25:08.343 bw ( KiB/s): min=25394, max=35328, per=33.32%, avg=28749.89, stdev=2815.28, samples=9 00:25:08.343 iops : min= 198, max= 276, avg=224.56, stdev=22.05, samples=9 00:25:08.343 lat (msec) : 10=4.09%, 20=95.91% 00:25:08.343 cpu : usr=93.85%, sys=5.39%, ctx=148, majf=0, minf=0 00:25:08.343 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.343 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.343 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:08.343 filename0: (groupid=0, jobs=1): err= 0: pid=83629: Wed Nov 27 06:20:12 2024 00:25:08.343 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(141MiB/5001msec) 00:25:08.343 slat (nsec): min=6644, max=86953, avg=20502.37, stdev=13180.71 00:25:08.343 clat (usec): min=7582, max=16230, avg=13287.52, stdev=1497.45 00:25:08.343 lat (usec): min=7599, max=16246, avg=13308.02, stdev=1495.54 00:25:08.343 clat percentiles (usec): 00:25:08.343 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10683], 20.00th=[12518], 00:25:08.343 | 30.00th=[12911], 40.00th=[13173], 
50.00th=[13435], 60.00th=[13698], 00:25:08.343 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15139], 95.00th=[15401], 00:25:08.343 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:25:08.343 | 99.99th=[16188] 00:25:08.343 bw ( KiB/s): min=26059, max=36096, per=33.42%, avg=28836.78, stdev=2958.51, samples=9 00:25:08.343 iops : min= 203, max= 282, avg=225.22, stdev=23.18, samples=9 00:25:08.343 lat (msec) : 10=4.27%, 20=95.73% 00:25:08.343 cpu : usr=92.16%, sys=6.56%, ctx=75, majf=0, minf=0 00:25:08.343 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.343 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.343 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:08.343 00:25:08.343 Run status group 0 (all jobs): 00:25:08.343 READ: bw=84.3MiB/s (88.3MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.5MB/s), io=422MiB (442MB), run=5001-5007msec 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:08.343 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:08.344 
06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 bdev_null0 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 [2024-11-27 06:20:12.522180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 bdev_null1 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 bdev_null2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:08.344 06:20:12 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:08.344 { 00:25:08.344 "params": { 00:25:08.344 "name": "Nvme$subsystem", 00:25:08.344 "trtype": "$TEST_TRANSPORT", 00:25:08.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.344 "adrfam": "ipv4", 00:25:08.344 "trsvcid": "$NVMF_PORT", 00:25:08.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.344 "hdgst": ${hdgst:-false}, 00:25:08.344 "ddgst": ${ddgst:-false} 00:25:08.344 }, 00:25:08.344 "method": "bdev_nvme_attach_controller" 00:25:08.344 } 00:25:08.344 EOF 00:25:08.344 )") 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:08.344 { 00:25:08.344 "params": { 00:25:08.344 "name": "Nvme$subsystem", 00:25:08.344 "trtype": "$TEST_TRANSPORT", 00:25:08.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.344 "adrfam": "ipv4", 00:25:08.344 "trsvcid": "$NVMF_PORT", 00:25:08.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.344 "hdgst": ${hdgst:-false}, 00:25:08.344 "ddgst": ${ddgst:-false} 00:25:08.344 }, 00:25:08.344 "method": "bdev_nvme_attach_controller" 00:25:08.344 } 00:25:08.344 EOF 00:25:08.344 )") 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:08.344 
06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:08.344 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:08.344 { 00:25:08.344 "params": { 00:25:08.344 "name": "Nvme$subsystem", 00:25:08.344 "trtype": "$TEST_TRANSPORT", 00:25:08.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.344 "adrfam": "ipv4", 00:25:08.344 "trsvcid": "$NVMF_PORT", 00:25:08.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.344 "hdgst": ${hdgst:-false}, 00:25:08.345 "ddgst": ${ddgst:-false} 00:25:08.345 }, 00:25:08.345 "method": "bdev_nvme_attach_controller" 00:25:08.345 } 00:25:08.345 EOF 00:25:08.345 )") 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:08.345 "params": { 00:25:08.345 "name": "Nvme0", 00:25:08.345 "trtype": "tcp", 00:25:08.345 "traddr": "10.0.0.3", 00:25:08.345 "adrfam": "ipv4", 00:25:08.345 "trsvcid": "4420", 00:25:08.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.345 "hdgst": false, 00:25:08.345 "ddgst": false 00:25:08.345 }, 00:25:08.345 "method": "bdev_nvme_attach_controller" 00:25:08.345 },{ 00:25:08.345 "params": { 00:25:08.345 "name": "Nvme1", 00:25:08.345 "trtype": "tcp", 00:25:08.345 "traddr": "10.0.0.3", 00:25:08.345 "adrfam": "ipv4", 00:25:08.345 "trsvcid": "4420", 00:25:08.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.345 "hdgst": false, 00:25:08.345 "ddgst": false 00:25:08.345 }, 00:25:08.345 "method": "bdev_nvme_attach_controller" 00:25:08.345 },{ 00:25:08.345 "params": { 00:25:08.345 "name": "Nvme2", 00:25:08.345 "trtype": "tcp", 00:25:08.345 "traddr": "10.0.0.3", 00:25:08.345 "adrfam": "ipv4", 00:25:08.345 "trsvcid": "4420", 00:25:08.345 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:08.345 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:08.345 "hdgst": false, 00:25:08.345 "ddgst": false 00:25:08.345 }, 00:25:08.345 "method": "bdev_nvme_attach_controller" 00:25:08.345 }' 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- 
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:08.345 06:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.345 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:08.345 ... 00:25:08.345 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:08.345 ... 00:25:08.345 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:08.345 ... 00:25:08.345 fio-3.35 00:25:08.345 Starting 24 threads 00:25:20.543 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83728: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=278, BW=1115KiB/s (1142kB/s)(11.0MiB/10075msec) 00:25:20.543 slat (usec): min=6, max=8049, avg=30.26, stdev=302.79 00:25:20.543 clat (msec): min=3, max=143, avg=57.11, stdev=31.88 00:25:20.543 lat (msec): min=3, max=143, avg=57.14, stdev=31.88 00:25:20.543 clat percentiles (msec): 00:25:20.543 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 24], 00:25:20.543 | 30.00th=[ 34], 40.00th=[ 48], 50.00th=[ 61], 60.00th=[ 72], 00:25:20.543 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 109], 00:25:20.543 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 144], 00:25:20.543 | 99.99th=[ 144] 00:25:20.543 bw ( KiB/s): min= 640, max= 4151, per=4.43%, avg=1115.15, stdev=823.56, samples=20 00:25:20.543 iops : min= 160, max= 1037, avg=278.70, stdev=205.77, samples=20 00:25:20.543 lat (msec) : 4=0.07%, 10=0.64%, 20=15.46%, 50=26.21%, 100=47.61% 00:25:20.543 lat (msec) : 250=10.01% 00:25:20.543 cpu : usr=35.42%, sys=1.34%, ctx=1037, majf=0, minf=9 00:25:20.543 IO depths : 1=0.5%, 2=2.7%, 4=9.3%, 8=72.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:25:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 complete : 0=0.0%, 4=90.1%, 8=7.8%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83729: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=255, BW=1024KiB/s (1048kB/s)(10.0MiB/10051msec) 00:25:20.543 slat (usec): min=5, max=8985, avg=30.28, stdev=309.13 00:25:20.543 clat (msec): min=9, max=148, avg=62.27, stdev=35.47 00:25:20.543 lat (msec): min=9, max=148, avg=62.30, stdev=35.48 00:25:20.543 clat percentiles (msec): 00:25:20.543 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 24], 00:25:20.543 | 30.00th=[ 34], 40.00th=[ 53], 50.00th=[ 71], 60.00th=[ 77], 00:25:20.543 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 120], 00:25:20.543 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:25:20.543 | 99.99th=[ 148] 00:25:20.543 bw ( KiB/s): min= 512, max= 4216, per=4.06%, avg=1022.80, stdev=835.48, samples=20 00:25:20.543 iops : min= 128, max= 1054, avg=255.70, stdev=208.87, samples=20 00:25:20.543 lat (msec) : 10=0.12%, 20=16.33%, 50=22.86%, 100=44.25%, 250=16.45% 00:25:20.543 cpu : usr=39.49%, sys=2.19%, ctx=1826, majf=0, minf=9 00:25:20.543 IO depths : 1=0.7%, 2=5.2%, 4=18.6%, 8=62.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 complete : 0=0.0%, 4=92.7%, 8=3.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 
issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83730: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=237, BW=952KiB/s (975kB/s)(9568KiB/10052msec) 00:25:20.543 slat (usec): min=4, max=8027, avg=35.42, stdev=365.27 00:25:20.543 clat (msec): min=11, max=151, avg=66.99, stdev=24.49 00:25:20.543 lat (msec): min=11, max=151, avg=67.02, stdev=24.50 00:25:20.543 clat percentiles (msec): 00:25:20.543 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 46], 00:25:20.543 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:25:20.543 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:25:20.543 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 136], 00:25:20.543 | 99.99th=[ 153] 00:25:20.543 bw ( KiB/s): min= 608, max= 2048, per=3.78%, avg=951.45, stdev=327.82, samples=20 00:25:20.543 iops : min= 152, max= 512, avg=237.85, stdev=81.94, samples=20 00:25:20.543 lat (msec) : 20=0.17%, 50=29.22%, 100=60.62%, 250=9.99% 00:25:20.543 cpu : usr=33.20%, sys=1.55%, ctx=899, majf=0, minf=9 00:25:20.543 IO depths : 1=0.3%, 2=1.2%, 4=3.6%, 8=78.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 issued rwts: total=2392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83731: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=270, BW=1080KiB/s (1106kB/s)(10.6MiB/10003msec) 00:25:20.543 slat (usec): min=7, max=8105, avg=47.35, stdev=415.90 00:25:20.543 clat (msec): min=5, max=132, avg=59.03, stdev=28.02 00:25:20.543 lat (msec): min=5, max=132, avg=59.08, stdev=28.03 00:25:20.543 clat percentiles (msec): 00:25:20.543 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 31], 00:25:20.543 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 70], 00:25:20.543 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:25:20.543 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:25:20.543 | 99.99th=[ 132] 00:25:20.543 bw ( KiB/s): min= 624, max= 1984, per=3.81%, avg=959.05, stdev=282.28, samples=19 00:25:20.543 iops : min= 156, max= 496, avg=239.74, stdev=70.57, samples=19 00:25:20.543 lat (msec) : 10=1.41%, 20=10.03%, 50=27.24%, 100=53.44%, 250=7.88% 00:25:20.543 cpu : usr=37.82%, sys=1.81%, ctx=1039, majf=0, minf=9 00:25:20.543 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=83.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 issued rwts: total=2702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83732: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.2MiB/10031msec) 00:25:20.543 slat (usec): min=4, max=8040, avg=33.81, stdev=304.72 00:25:20.543 clat (msec): min=13, max=137, avg=61.57, stdev=27.27 00:25:20.543 lat (msec): min=13, max=137, avg=61.60, stdev=27.27 00:25:20.543 clat percentiles (msec): 00:25:20.543 | 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 23], 20.00th=[ 36], 00:25:20.543 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 63], 
60.00th=[ 72], 00:25:20.543 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 109], 00:25:20.543 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 132], 00:25:20.543 | 99.99th=[ 138] 00:25:20.543 bw ( KiB/s): min= 640, max= 1992, per=3.72%, avg=937.95, stdev=275.78, samples=19 00:25:20.543 iops : min= 160, max= 498, avg=234.42, stdev=68.92, samples=19 00:25:20.543 lat (msec) : 20=7.93%, 50=26.59%, 100=56.56%, 250=8.93% 00:25:20.543 cpu : usr=43.42%, sys=1.96%, ctx=1047, majf=0, minf=9 00:25:20.543 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83733: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=280, BW=1123KiB/s (1150kB/s)(11.0MiB/10027msec) 00:25:20.543 slat (usec): min=5, max=8057, avg=38.57, stdev=385.66 00:25:20.543 clat (msec): min=8, max=142, avg=56.79, stdev=30.56 00:25:20.543 lat (msec): min=8, max=142, avg=56.83, stdev=30.58 00:25:20.543 clat percentiles (msec): 00:25:20.543 | 1.00th=[ 12], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 24], 00:25:20.543 | 30.00th=[ 35], 40.00th=[ 48], 50.00th=[ 61], 60.00th=[ 71], 00:25:20.543 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:25:20.543 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 132], 00:25:20.543 | 99.99th=[ 142] 00:25:20.543 bw ( KiB/s): min= 560, max= 4048, per=4.46%, avg=1121.75, stdev=798.44, samples=20 00:25:20.543 iops : min= 140, max= 1012, avg=280.30, stdev=199.64, samples=20 00:25:20.543 lat (msec) : 10=0.28%, 20=14.50%, 50=28.18%, 100=49.22%, 250=7.82% 00:25:20.543 cpu : usr=32.55%, sys=1.43%, ctx=920, majf=0, minf=9 00:25:20.543 IO depths : 1=0.6%, 2=2.8%, 4=8.8%, 8=73.3%, 16=14.4%, 32=0.0%, >=64=0.0% 00:25:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 complete : 0=0.0%, 4=89.7%, 8=8.3%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.543 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=83734: Wed Nov 27 06:20:24 2024 00:25:20.543 read: IOPS=234, BW=940KiB/s (962kB/s)(9440KiB/10046msec) 00:25:20.544 slat (usec): min=4, max=8048, avg=45.50, stdev=374.02 00:25:20.544 clat (msec): min=9, max=148, avg=67.80, stdev=26.51 00:25:20.544 lat (msec): min=9, max=148, avg=67.84, stdev=26.51 00:25:20.544 clat percentiles (msec): 00:25:20.544 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 45], 00:25:20.544 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 75], 00:25:20.544 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 113], 00:25:20.544 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:25:20.544 | 99.99th=[ 148] 00:25:20.544 bw ( KiB/s): min= 608, max= 2160, per=3.65%, avg=918.37, stdev=347.72, samples=19 00:25:20.544 iops : min= 152, max= 540, avg=229.58, stdev=86.93, samples=19 00:25:20.544 lat (msec) : 10=0.08%, 20=0.34%, 50=28.31%, 100=59.53%, 250=11.74% 00:25:20.544 cpu : usr=37.04%, sys=1.37%, ctx=1145, majf=0, minf=9 00:25:20.544 IO depths : 1=0.1%, 2=2.4%, 4=9.3%, 8=73.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename0: (groupid=0, jobs=1): err= 0: pid=83735: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=256, BW=1025KiB/s (1049kB/s)(10.0MiB/10010msec) 00:25:20.544 slat (usec): min=4, max=8001, avg=29.34, stdev=209.40 00:25:20.544 clat (msec): min=5, max=135, avg=62.33, stdev=28.61 00:25:20.544 lat (msec): min=5, max=135, avg=62.36, stdev=28.61 00:25:20.544 clat percentiles (msec): 00:25:20.544 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 32], 00:25:20.544 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 71], 00:25:20.544 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 112], 00:25:20.544 | 99.00th=[ 127], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 136], 00:25:20.544 | 99.99th=[ 136] 00:25:20.544 bw ( KiB/s): min= 624, max= 1984, per=3.64%, avg=916.16, stdev=284.05, samples=19 00:25:20.544 iops : min= 156, max= 496, avg=229.00, stdev=71.01, samples=19 00:25:20.544 lat (msec) : 10=0.51%, 20=8.93%, 50=25.16%, 100=54.84%, 250=10.57% 00:25:20.544 cpu : usr=40.14%, sys=1.78%, ctx=1186, majf=0, minf=9 00:25:20.544 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename1: (groupid=0, jobs=1): err= 0: pid=83736: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=268, BW=1074KiB/s (1100kB/s)(10.5MiB/10001msec) 00:25:20.544 slat (usec): min=3, max=8062, avg=47.41, stdev=432.18 00:25:20.544 clat (usec): min=1071, max=143951, avg=59372.75, stdev=29317.20 00:25:20.544 lat (usec): min=1082, max=143962, avg=59420.16, stdev=29315.03 00:25:20.544 clat percentiles (msec): 00:25:20.544 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 27], 00:25:20.544 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 61], 60.00th=[ 72], 00:25:20.544 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:25:20.544 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 144], 00:25:20.544 | 99.99th=[ 144] 00:25:20.544 bw ( KiB/s): min= 640, max= 2752, per=3.88%, avg=977.42, stdev=452.74, samples=19 00:25:20.544 iops : min= 160, max= 688, avg=244.26, stdev=113.17, samples=19 00:25:20.544 lat (msec) : 2=0.34%, 4=1.19%, 10=1.23%, 20=9.90%, 50=26.99% 00:25:20.544 lat (msec) : 100=52.16%, 250=8.19% 00:25:20.544 cpu : usr=35.67%, sys=1.36%, ctx=1016, majf=0, minf=9 00:25:20.544 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename1: (groupid=0, jobs=1): err= 0: pid=83737: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=266, BW=1065KiB/s (1090kB/s)(10.4MiB/10006msec) 00:25:20.544 slat (usec): min=3, max=8004, avg=29.71, stdev=215.68 00:25:20.544 clat (msec): min=5, max=143, avg=59.99, stdev=27.67 00:25:20.544 lat (msec): min=5, 
max=144, avg=60.01, stdev=27.66 00:25:20.544 clat percentiles (msec): 00:25:20.544 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 33], 00:25:20.544 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 71], 00:25:20.544 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:25:20.544 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:25:20.544 | 99.99th=[ 144] 00:25:20.544 bw ( KiB/s): min= 656, max= 1896, per=3.77%, avg=948.95, stdev=259.49, samples=19 00:25:20.544 iops : min= 164, max= 474, avg=237.21, stdev=64.86, samples=19 00:25:20.544 lat (msec) : 10=0.23%, 20=9.50%, 50=28.28%, 100=53.36%, 250=8.64% 00:25:20.544 cpu : usr=37.85%, sys=1.65%, ctx=1074, majf=0, minf=9 00:25:20.544 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename1: (groupid=0, jobs=1): err= 0: pid=83738: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=252, BW=1012KiB/s (1036kB/s)(9.92MiB/10041msec) 00:25:20.544 slat (usec): min=6, max=8039, avg=29.02, stdev=238.36 00:25:20.544 clat (msec): min=3, max=160, avg=63.02, stdev=37.35 00:25:20.544 lat (msec): min=3, max=160, avg=63.05, stdev=37.36 00:25:20.544 clat percentiles (msec): 00:25:20.544 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 22], 00:25:20.544 | 30.00th=[ 30], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 77], 00:25:20.544 | 70.00th=[ 83], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 121], 00:25:20.544 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 155], 99.95th=[ 161], 00:25:20.544 | 99.99th=[ 161] 00:25:20.544 bw ( KiB/s): min= 512, max= 4295, per=4.01%, avg=1009.15, stdev=868.48, samples=20 00:25:20.544 iops : min= 128, max= 1073, avg=252.25, stdev=216.97, samples=20 00:25:20.544 lat (msec) : 4=0.08%, 10=0.63%, 20=17.64%, 50=20.47%, 100=42.76% 00:25:20.544 lat (msec) : 250=18.43% 00:25:20.544 cpu : usr=42.08%, sys=1.88%, ctx=1175, majf=0, minf=9 00:25:20.544 IO depths : 1=0.6%, 2=6.2%, 4=23.0%, 8=57.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=93.9%, 8=0.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename1: (groupid=0, jobs=1): err= 0: pid=83739: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=285, BW=1142KiB/s (1169kB/s)(11.2MiB/10073msec) 00:25:20.544 slat (usec): min=4, max=3044, avg=19.70, stdev=110.89 00:25:20.544 clat (usec): min=1578, max=133554, avg=55780.97, stdev=34851.03 00:25:20.544 lat (usec): min=1585, max=133563, avg=55800.68, stdev=34849.54 00:25:20.544 clat percentiles (usec): 00:25:20.544 | 1.00th=[ 1680], 5.00th=[ 8094], 10.00th=[ 10814], 20.00th=[ 15008], 00:25:20.544 | 30.00th=[ 26084], 40.00th=[ 44303], 50.00th=[ 61080], 60.00th=[ 73925], 00:25:20.544 | 70.00th=[ 79168], 80.00th=[ 86508], 90.00th=[101188], 95.00th=[110625], 00:25:20.544 | 99.00th=[128451], 99.50th=[132645], 99.90th=[132645], 99.95th=[132645], 00:25:20.544 | 99.99th=[133694] 00:25:20.544 bw ( KiB/s): min= 624, max= 5370, per=4.55%, avg=1144.85, stdev=1081.23, samples=20 00:25:20.544 iops : min= 156, max= 1342, 
avg=286.15, stdev=270.22, samples=20 00:25:20.544 lat (msec) : 2=2.23%, 4=0.56%, 10=5.77%, 20=15.65%, 50=18.82% 00:25:20.544 lat (msec) : 100=45.91%, 250=11.06% 00:25:20.544 cpu : usr=41.72%, sys=2.28%, ctx=1822, majf=0, minf=0 00:25:20.544 IO depths : 1=0.6%, 2=4.3%, 4=15.5%, 8=65.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=91.8%, 8=4.7%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename1: (groupid=0, jobs=1): err= 0: pid=83740: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=276, BW=1104KiB/s (1131kB/s)(10.8MiB/10050msec) 00:25:20.544 slat (usec): min=6, max=4070, avg=25.03, stdev=150.74 00:25:20.544 clat (msec): min=5, max=163, avg=57.78, stdev=32.91 00:25:20.544 lat (msec): min=5, max=163, avg=57.80, stdev=32.91 00:25:20.544 clat percentiles (msec): 00:25:20.544 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 22], 00:25:20.544 | 30.00th=[ 32], 40.00th=[ 48], 50.00th=[ 63], 60.00th=[ 72], 00:25:20.544 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 110], 00:25:20.544 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:25:20.544 | 99.99th=[ 165] 00:25:20.544 bw ( KiB/s): min= 616, max= 4544, per=4.38%, avg=1103.20, stdev=895.46, samples=20 00:25:20.544 iops : min= 154, max= 1136, avg=275.80, stdev=223.86, samples=20 00:25:20.544 lat (msec) : 10=1.23%, 20=17.23%, 50=23.97%, 100=47.84%, 250=9.73% 00:25:20.544 cpu : usr=38.46%, sys=1.39%, ctx=1091, majf=0, minf=9 00:25:20.544 IO depths : 1=0.4%, 2=3.1%, 4=11.2%, 8=70.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:25:20.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.544 issued rwts: total=2774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.544 filename1: (groupid=0, jobs=1): err= 0: pid=83741: Wed Nov 27 06:20:24 2024 00:25:20.544 read: IOPS=248, BW=992KiB/s (1016kB/s)(9948KiB/10028msec) 00:25:20.544 slat (usec): min=4, max=8046, avg=33.78, stdev=274.01 00:25:20.544 clat (msec): min=10, max=168, avg=64.33, stdev=29.70 00:25:20.545 lat (msec): min=10, max=168, avg=64.37, stdev=29.69 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 22], 20.00th=[ 35], 00:25:20.545 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 73], 00:25:20.545 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 115], 00:25:20.545 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 169], 00:25:20.545 | 99.99th=[ 169] 00:25:20.545 bw ( KiB/s): min= 512, max= 3024, per=3.94%, avg=990.30, stdev=548.17, samples=20 00:25:20.545 iops : min= 128, max= 756, avg=247.50, stdev=137.06, samples=20 00:25:20.545 lat (msec) : 20=8.97%, 50=24.69%, 100=54.93%, 250=11.42% 00:25:20.545 cpu : usr=41.72%, sys=1.90%, ctx=1621, majf=0, minf=9 00:25:20.545 IO depths : 1=0.1%, 2=1.8%, 4=7.4%, 8=75.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=2487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename1: 
(groupid=0, jobs=1): err= 0: pid=83742: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=246, BW=988KiB/s (1011kB/s)(9904KiB/10028msec) 00:25:20.545 slat (usec): min=4, max=8040, avg=41.44, stdev=382.74 00:25:20.545 clat (msec): min=13, max=143, avg=64.59, stdev=27.65 00:25:20.545 lat (msec): min=13, max=143, avg=64.63, stdev=27.64 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 23], 20.00th=[ 40], 00:25:20.545 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 67], 60.00th=[ 73], 00:25:20.545 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 113], 00:25:20.545 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 140], 00:25:20.545 | 99.99th=[ 144] 00:25:20.545 bw ( KiB/s): min= 640, max= 2920, per=3.92%, avg=985.30, stdev=491.50, samples=20 00:25:20.545 iops : min= 160, max= 730, avg=246.20, stdev=122.92, samples=20 00:25:20.545 lat (msec) : 20=5.94%, 50=25.12%, 100=59.09%, 250=9.85% 00:25:20.545 cpu : usr=39.19%, sys=1.60%, ctx=1273, majf=0, minf=9 00:25:20.545 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename1: (groupid=0, jobs=1): err= 0: pid=83743: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=253, BW=1014KiB/s (1039kB/s)(9.91MiB/10006msec) 00:25:20.545 slat (usec): min=4, max=7999, avg=42.08, stdev=294.33 00:25:20.545 clat (msec): min=3, max=163, avg=62.90, stdev=31.53 00:25:20.545 lat (msec): min=3, max=163, avg=62.94, stdev=31.52 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 28], 00:25:20.545 | 30.00th=[ 47], 40.00th=[ 55], 50.00th=[ 67], 60.00th=[ 73], 00:25:20.545 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 115], 00:25:20.545 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 163], 00:25:20.545 | 99.99th=[ 163] 00:25:20.545 bw ( KiB/s): min= 496, max= 2144, per=3.55%, avg=893.95, stdev=347.25, samples=19 00:25:20.545 iops : min= 124, max= 536, avg=223.42, stdev=86.78, samples=19 00:25:20.545 lat (msec) : 4=0.04%, 10=1.14%, 20=10.13%, 50=24.40%, 100=52.82% 00:25:20.545 lat (msec) : 250=11.47% 00:25:20.545 cpu : usr=44.54%, sys=1.94%, ctx=1281, majf=0, minf=9 00:25:20.545 IO depths : 1=0.1%, 2=2.0%, 4=8.2%, 8=74.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 complete : 0=0.0%, 4=89.3%, 8=8.9%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=2537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename2: (groupid=0, jobs=1): err= 0: pid=83744: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=302, BW=1208KiB/s (1237kB/s)(11.9MiB/10044msec) 00:25:20.545 slat (usec): min=5, max=7069, avg=23.05, stdev=164.59 00:25:20.545 clat (msec): min=2, max=147, avg=52.79, stdev=32.74 00:25:20.545 lat (msec): min=2, max=147, avg=52.81, stdev=32.74 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 16], 00:25:20.545 | 30.00th=[ 27], 40.00th=[ 42], 50.00th=[ 57], 60.00th=[ 70], 00:25:20.545 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 108], 00:25:20.545 | 99.00th=[ 125], 
99.50th=[ 130], 99.90th=[ 134], 99.95th=[ 142], 00:25:20.545 | 99.99th=[ 148] 00:25:20.545 bw ( KiB/s): min= 608, max= 5709, per=4.80%, avg=1207.85, stdev=1128.71, samples=20 00:25:20.545 iops : min= 152, max= 1427, avg=301.95, stdev=282.12, samples=20 00:25:20.545 lat (msec) : 4=0.26%, 10=12.10%, 20=11.11%, 50=22.94%, 100=46.14% 00:25:20.545 lat (msec) : 250=7.45% 00:25:20.545 cpu : usr=39.15%, sys=1.80%, ctx=1054, majf=0, minf=9 00:25:20.545 IO depths : 1=0.1%, 2=2.2%, 4=8.9%, 8=73.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 complete : 0=0.0%, 4=90.0%, 8=8.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=3034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename2: (groupid=0, jobs=1): err= 0: pid=83745: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=274, BW=1098KiB/s (1125kB/s)(10.7MiB/10003msec) 00:25:20.545 slat (usec): min=6, max=8058, avg=44.17, stdev=389.55 00:25:20.545 clat (msec): min=5, max=132, avg=58.07, stdev=28.29 00:25:20.545 lat (msec): min=5, max=132, avg=58.12, stdev=28.29 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 26], 00:25:20.545 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 68], 00:25:20.545 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:25:20.545 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:25:20.545 | 99.99th=[ 132] 00:25:20.545 bw ( KiB/s): min= 624, max= 2256, per=3.87%, avg=974.63, stdev=334.40, samples=19 00:25:20.545 iops : min= 156, max= 564, avg=243.63, stdev=83.60, samples=19 00:25:20.545 lat (msec) : 10=1.31%, 20=10.41%, 50=29.27%, 100=51.07%, 250=7.94% 00:25:20.545 cpu : usr=38.65%, sys=1.44%, ctx=916, majf=0, minf=9 00:25:20.545 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=2747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename2: (groupid=0, jobs=1): err= 0: pid=83746: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10050msec) 00:25:20.545 slat (usec): min=7, max=8039, avg=25.81, stdev=240.14 00:25:20.545 clat (msec): min=3, max=143, avg=56.68, stdev=30.86 00:25:20.545 lat (msec): min=3, max=143, avg=56.71, stdev=30.85 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 24], 00:25:20.545 | 30.00th=[ 34], 40.00th=[ 50], 50.00th=[ 62], 60.00th=[ 71], 00:25:20.545 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:25:20.545 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 131], 00:25:20.545 | 99.99th=[ 144] 00:25:20.545 bw ( KiB/s): min= 664, max= 4560, per=4.47%, avg=1124.40, stdev=878.81, samples=20 00:25:20.545 iops : min= 166, max= 1140, avg=281.10, stdev=219.70, samples=20 00:25:20.545 lat (msec) : 4=0.57%, 10=0.14%, 20=16.84%, 50=23.81%, 100=50.73% 00:25:20.545 lat (msec) : 250=7.92% 00:25:20.545 cpu : usr=36.55%, sys=1.57%, ctx=1059, majf=0, minf=9 00:25:20.545 IO depths : 1=0.4%, 2=2.4%, 4=8.0%, 8=74.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 
complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=2827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename2: (groupid=0, jobs=1): err= 0: pid=83747: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=249, BW=998KiB/s (1022kB/s)(9.77MiB/10020msec) 00:25:20.545 slat (usec): min=4, max=8037, avg=28.08, stdev=211.31 00:25:20.545 clat (msec): min=11, max=143, avg=63.99, stdev=28.33 00:25:20.545 lat (msec): min=11, max=143, avg=64.02, stdev=28.33 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 36], 00:25:20.545 | 30.00th=[ 49], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 73], 00:25:20.545 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 112], 00:25:20.545 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:25:20.545 | 99.99th=[ 144] 00:25:20.545 bw ( KiB/s): min= 638, max= 2968, per=3.96%, avg=995.90, stdev=530.40, samples=20 00:25:20.545 iops : min= 159, max= 742, avg=248.95, stdev=132.62, samples=20 00:25:20.545 lat (msec) : 20=8.40%, 50=24.20%, 100=56.32%, 250=11.08% 00:25:20.545 cpu : usr=37.48%, sys=1.94%, ctx=1312, majf=0, minf=9 00:25:20.545 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.545 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.545 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.545 filename2: (groupid=0, jobs=1): err= 0: pid=83748: Wed Nov 27 06:20:24 2024 00:25:20.545 read: IOPS=265, BW=1063KiB/s (1089kB/s)(10.4MiB/10009msec) 00:25:20.545 slat (usec): min=3, max=8047, avg=35.28, stdev=247.73 00:25:20.545 clat (msec): min=5, max=132, avg=60.03, stdev=27.94 00:25:20.545 lat (msec): min=5, max=132, avg=60.06, stdev=27.94 00:25:20.545 clat percentiles (msec): 00:25:20.545 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 31], 00:25:20.545 | 30.00th=[ 46], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 70], 00:25:20.545 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 107], 00:25:20.545 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 133], 99.95th=[ 133], 00:25:20.545 | 99.99th=[ 133] 00:25:20.545 bw ( KiB/s): min= 640, max= 2152, per=3.78%, avg=952.84, stdev=316.87, samples=19 00:25:20.545 iops : min= 160, max= 538, avg=238.21, stdev=79.22, samples=19 00:25:20.545 lat (msec) : 10=0.11%, 20=9.85%, 50=26.53%, 100=55.32%, 250=8.19% 00:25:20.546 cpu : usr=45.94%, sys=1.98%, ctx=1305, majf=0, minf=10 00:25:20.546 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:20.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 issued rwts: total=2661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.546 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.546 filename2: (groupid=0, jobs=1): err= 0: pid=83749: Wed Nov 27 06:20:24 2024 00:25:20.546 read: IOPS=260, BW=1044KiB/s (1069kB/s)(10.2MiB/10007msec) 00:25:20.546 slat (usec): min=3, max=8065, avg=49.59, stdev=429.29 00:25:20.546 clat (msec): min=10, max=131, avg=61.15, stdev=27.21 00:25:20.546 lat (msec): min=10, max=131, avg=61.19, stdev=27.21 00:25:20.546 clat percentiles (msec): 00:25:20.546 | 1.00th=[ 15], 5.00th=[ 18], 
10.00th=[ 23], 20.00th=[ 35], 00:25:20.546 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 62], 60.00th=[ 72], 00:25:20.546 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:25:20.546 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:25:20.546 | 99.99th=[ 132] 00:25:20.546 bw ( KiB/s): min= 664, max= 1936, per=3.73%, avg=939.79, stdev=263.41, samples=19 00:25:20.546 iops : min= 166, max= 484, avg=234.95, stdev=65.85, samples=19 00:25:20.546 lat (msec) : 20=8.89%, 50=27.88%, 100=54.88%, 250=8.35% 00:25:20.546 cpu : usr=36.74%, sys=1.50%, ctx=997, majf=0, minf=9 00:25:20.546 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:20.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 issued rwts: total=2611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.546 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.546 filename2: (groupid=0, jobs=1): err= 0: pid=83750: Wed Nov 27 06:20:24 2024 00:25:20.546 read: IOPS=253, BW=1013KiB/s (1038kB/s)(9.92MiB/10021msec) 00:25:20.546 slat (usec): min=4, max=8060, avg=38.47, stdev=365.28 00:25:20.546 clat (msec): min=13, max=140, avg=62.98, stdev=26.56 00:25:20.546 lat (msec): min=13, max=140, avg=63.02, stdev=26.56 00:25:20.546 clat percentiles (msec): 00:25:20.546 | 1.00th=[ 15], 5.00th=[ 19], 10.00th=[ 23], 20.00th=[ 40], 00:25:20.546 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 72], 00:25:20.546 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 108], 00:25:20.546 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 138], 99.95th=[ 140], 00:25:20.546 | 99.99th=[ 140] 00:25:20.546 bw ( KiB/s): min= 640, max= 3024, per=4.02%, avg=1011.40, stdev=498.72, samples=20 00:25:20.546 iops : min= 160, max= 756, avg=252.85, stdev=124.68, samples=20 00:25:20.546 lat (msec) : 20=6.50%, 50=27.06%, 100=57.78%, 250=8.66% 00:25:20.546 cpu : usr=35.27%, sys=1.59%, ctx=1130, majf=0, minf=9 00:25:20.546 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:25:20.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 issued rwts: total=2539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.546 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.546 filename2: (groupid=0, jobs=1): err= 0: pid=83751: Wed Nov 27 06:20:24 2024 00:25:20.546 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.1MiB/10003msec) 00:25:20.546 slat (usec): min=7, max=4093, avg=35.21, stdev=224.36 00:25:20.546 clat (msec): min=5, max=162, avg=61.44, stdev=30.85 00:25:20.546 lat (msec): min=5, max=162, avg=61.48, stdev=30.84 00:25:20.546 clat percentiles (msec): 00:25:20.546 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 27], 00:25:20.546 | 30.00th=[ 46], 40.00th=[ 54], 50.00th=[ 64], 60.00th=[ 72], 00:25:20.546 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 112], 00:25:20.546 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 163], 00:25:20.546 | 99.99th=[ 163] 00:25:20.546 bw ( KiB/s): min= 528, max= 2200, per=3.65%, avg=919.47, stdev=349.52, samples=19 00:25:20.546 iops : min= 132, max= 550, avg=229.84, stdev=87.38, samples=19 00:25:20.546 lat (msec) : 10=1.35%, 20=11.09%, 50=23.17%, 100=55.54%, 250=8.85% 00:25:20.546 cpu : usr=42.92%, sys=1.87%, ctx=1294, majf=0, minf=9 00:25:20.546 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.9%, 
16=15.2%, 32=0.0%, >=64=0.0% 00:25:20.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.546 issued rwts: total=2598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.546 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:20.546 00:25:20.546 Run status group 0 (all jobs): 00:25:20.546 READ: bw=24.6MiB/s (25.8MB/s), 940KiB/s-1208KiB/s (962kB/s-1237kB/s), io=248MiB (260MB), run=10001-10075msec 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 bdev_null0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.546 [2024-11-27 06:20:25.295871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.546 
06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:20.546 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.547 bdev_null1 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:20.547 { 00:25:20.547 "params": { 00:25:20.547 "name": "Nvme$subsystem", 00:25:20.547 "trtype": "$TEST_TRANSPORT", 00:25:20.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.547 "adrfam": "ipv4", 00:25:20.547 "trsvcid": "$NVMF_PORT", 00:25:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.547 "hdgst": ${hdgst:-false}, 00:25:20.547 "ddgst": ${ddgst:-false} 00:25:20.547 }, 00:25:20.547 "method": "bdev_nvme_attach_controller" 00:25:20.547 } 00:25:20.547 EOF 00:25:20.547 )") 00:25:20.547 06:20:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:20.547 { 00:25:20.547 "params": { 00:25:20.547 "name": "Nvme$subsystem", 00:25:20.547 "trtype": "$TEST_TRANSPORT", 00:25:20.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.547 "adrfam": "ipv4", 00:25:20.547 "trsvcid": "$NVMF_PORT", 00:25:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.547 "hdgst": ${hdgst:-false}, 00:25:20.547 "ddgst": ${ddgst:-false} 00:25:20.547 }, 00:25:20.547 "method": "bdev_nvme_attach_controller" 00:25:20.547 } 00:25:20.547 EOF 00:25:20.547 )") 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:20.547 "params": { 00:25:20.547 "name": "Nvme0", 00:25:20.547 "trtype": "tcp", 00:25:20.547 "traddr": "10.0.0.3", 00:25:20.547 "adrfam": "ipv4", 00:25:20.547 "trsvcid": "4420", 00:25:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.547 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:20.547 "hdgst": false, 00:25:20.547 "ddgst": false 00:25:20.547 }, 00:25:20.547 "method": "bdev_nvme_attach_controller" 00:25:20.547 },{ 00:25:20.547 "params": { 00:25:20.547 "name": "Nvme1", 00:25:20.547 "trtype": "tcp", 00:25:20.547 "traddr": "10.0.0.3", 00:25:20.547 "adrfam": "ipv4", 00:25:20.547 "trsvcid": "4420", 00:25:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:20.547 "hdgst": false, 00:25:20.547 "ddgst": false 00:25:20.547 }, 00:25:20.547 "method": "bdev_nvme_attach_controller" 00:25:20.547 }' 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:20.547 06:20:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.547 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:20.547 ... 00:25:20.547 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:20.547 ... 
00:25:20.547 fio-3.35 00:25:20.547 Starting 4 threads 00:25:27.111 00:25:27.111 filename0: (groupid=0, jobs=1): err= 0: pid=83898: Wed Nov 27 06:20:31 2024 00:25:27.111 read: IOPS=2035, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5003msec) 00:25:27.112 slat (usec): min=5, max=120, avg=18.15, stdev=11.67 00:25:27.112 clat (usec): min=1129, max=7937, avg=3877.28, stdev=1104.75 00:25:27.112 lat (usec): min=1137, max=7945, avg=3895.43, stdev=1104.84 00:25:27.112 clat percentiles (usec): 00:25:27.112 | 1.00th=[ 1663], 5.00th=[ 2147], 10.00th=[ 2343], 20.00th=[ 2671], 00:25:27.112 | 30.00th=[ 3032], 40.00th=[ 3523], 50.00th=[ 4146], 60.00th=[ 4424], 00:25:27.112 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5211], 95.00th=[ 5342], 00:25:27.112 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6521], 99.95th=[ 6783], 00:25:27.112 | 99.99th=[ 7177] 00:25:27.112 bw ( KiB/s): min=13952, max=18443, per=25.44%, avg=16383.44, stdev=1408.22, samples=9 00:25:27.112 iops : min= 1744, max= 2305, avg=2047.89, stdev=175.96, samples=9 00:25:27.112 lat (msec) : 2=2.97%, 4=42.97%, 10=54.05% 00:25:27.112 cpu : usr=93.60%, sys=5.22%, ctx=10, majf=0, minf=0 00:25:27.112 IO depths : 1=0.5%, 2=5.9%, 4=60.7%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 issued rwts: total=10185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:27.112 filename0: (groupid=0, jobs=1): err= 0: pid=83899: Wed Nov 27 06:20:31 2024 00:25:27.112 read: IOPS=2110, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5002msec) 00:25:27.112 slat (usec): min=4, max=121, avg=20.84, stdev=11.43 00:25:27.112 clat (usec): min=493, max=7993, avg=3736.06, stdev=1125.25 00:25:27.112 lat (usec): min=506, max=8008, avg=3756.90, stdev=1124.78 00:25:27.112 clat percentiles (usec): 00:25:27.112 | 1.00th=[ 1696], 5.00th=[ 2008], 10.00th=[ 2311], 20.00th=[ 2606], 00:25:27.112 | 30.00th=[ 2900], 40.00th=[ 3228], 50.00th=[ 3785], 60.00th=[ 4228], 00:25:27.112 | 70.00th=[ 4555], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5342], 00:25:27.112 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6521], 00:25:27.112 | 99.99th=[ 7308] 00:25:27.112 bw ( KiB/s): min=14864, max=19056, per=26.57%, avg=17107.11, stdev=1251.17, samples=9 00:25:27.112 iops : min= 1858, max= 2382, avg=2138.33, stdev=156.37, samples=9 00:25:27.112 lat (usec) : 500=0.01% 00:25:27.112 lat (msec) : 2=4.95%, 4=48.59%, 10=46.45% 00:25:27.112 cpu : usr=94.44%, sys=4.46%, ctx=7, majf=0, minf=0 00:25:27.112 IO depths : 1=0.2%, 2=3.9%, 4=61.7%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 complete : 0=0.0%, 4=98.5%, 8=1.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 issued rwts: total=10558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:27.112 filename1: (groupid=0, jobs=1): err= 0: pid=83900: Wed Nov 27 06:20:31 2024 00:25:27.112 read: IOPS=2072, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5003msec) 00:25:27.112 slat (usec): min=4, max=206, avg=20.93, stdev=11.68 00:25:27.112 clat (usec): min=636, max=7347, avg=3803.85, stdev=1163.14 00:25:27.112 lat (usec): min=651, max=7362, avg=3824.79, stdev=1162.44 00:25:27.112 clat percentiles (usec): 00:25:27.112 | 1.00th=[ 1647], 5.00th=[ 2040], 10.00th=[ 2343], 20.00th=[ 2606], 00:25:27.112 | 30.00th=[ 
2900], 40.00th=[ 3261], 50.00th=[ 3982], 60.00th=[ 4359], 00:25:27.112 | 70.00th=[ 4621], 80.00th=[ 4948], 90.00th=[ 5211], 95.00th=[ 5407], 00:25:27.112 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 7111], 99.95th=[ 7242], 00:25:27.112 | 99.99th=[ 7308] 00:25:27.112 bw ( KiB/s): min=14880, max=19696, per=26.41%, avg=17002.44, stdev=1424.22, samples=9 00:25:27.112 iops : min= 1860, max= 2462, avg=2125.22, stdev=178.05, samples=9 00:25:27.112 lat (usec) : 750=0.01%, 1000=0.03% 00:25:27.112 lat (msec) : 2=4.73%, 4=45.39%, 10=49.84% 00:25:27.112 cpu : usr=92.96%, sys=5.52%, ctx=51, majf=0, minf=9 00:25:27.112 IO depths : 1=0.2%, 2=5.2%, 4=61.0%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 issued rwts: total=10367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:27.112 filename1: (groupid=0, jobs=1): err= 0: pid=83901: Wed Nov 27 06:20:31 2024 00:25:27.112 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5004msec) 00:25:27.112 slat (usec): min=4, max=152, avg=18.68, stdev=11.86 00:25:27.112 clat (usec): min=1107, max=8034, avg=4305.50, stdev=1081.22 00:25:27.112 lat (usec): min=1115, max=8057, avg=4324.18, stdev=1079.66 00:25:27.112 clat percentiles (usec): 00:25:27.112 | 1.00th=[ 1893], 5.00th=[ 2311], 10.00th=[ 2540], 20.00th=[ 3195], 00:25:27.112 | 30.00th=[ 4047], 40.00th=[ 4293], 50.00th=[ 4555], 60.00th=[ 4752], 00:25:27.112 | 70.00th=[ 5014], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:25:27.112 | 99.00th=[ 6587], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7177], 00:25:27.112 | 99.99th=[ 8029] 00:25:27.112 bw ( KiB/s): min=11776, max=17840, per=22.57%, avg=14530.11, stdev=2107.25, samples=9 00:25:27.112 iops : min= 1472, max= 2230, avg=1816.22, stdev=263.41, samples=9 00:25:27.112 lat (msec) : 2=1.40%, 4=27.50%, 10=71.10% 00:25:27.112 cpu : usr=93.78%, sys=5.08%, ctx=31, majf=0, minf=0 00:25:27.112 IO depths : 1=0.6%, 2=13.7%, 4=56.4%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.112 issued rwts: total=9164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:27.112 00:25:27.112 Run status group 0 (all jobs): 00:25:27.112 READ: bw=62.9MiB/s (65.9MB/s), 14.3MiB/s-16.5MiB/s (15.0MB/s-17.3MB/s), io=315MiB (330MB), run=5002-5004msec 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 ************************************ 00:25:27.112 END TEST fio_dif_rand_params 00:25:27.112 ************************************ 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.112 00:25:27.112 real 0m25.077s 00:25:27.112 user 2m16.128s 00:25:27.112 sys 0m6.987s 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 06:20:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:27.112 06:20:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:27.112 06:20:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 ************************************ 00:25:27.112 START TEST fio_dif_digest 00:25:27.112 ************************************ 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:27.112 06:20:31 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.112 bdev_null0 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.112 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.113 [2024-11-27 06:20:31.586289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.113 { 00:25:27.113 "params": { 
00:25:27.113 "name": "Nvme$subsystem", 00:25:27.113 "trtype": "$TEST_TRANSPORT", 00:25:27.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.113 "adrfam": "ipv4", 00:25:27.113 "trsvcid": "$NVMF_PORT", 00:25:27.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.113 "hdgst": ${hdgst:-false}, 00:25:27.113 "ddgst": ${ddgst:-false} 00:25:27.113 }, 00:25:27.113 "method": "bdev_nvme_attach_controller" 00:25:27.113 } 00:25:27.113 EOF 00:25:27.113 )") 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:27.113 "params": { 00:25:27.113 "name": "Nvme0", 00:25:27.113 "trtype": "tcp", 00:25:27.113 "traddr": "10.0.0.3", 00:25:27.113 "adrfam": "ipv4", 00:25:27.113 "trsvcid": "4420", 00:25:27.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:27.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:27.113 "hdgst": true, 00:25:27.113 "ddgst": true 00:25:27.113 }, 00:25:27.113 "method": "bdev_nvme_attach_controller" 00:25:27.113 }' 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:27.113 06:20:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:27.113 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:27.113 ... 
00:25:27.113 fio-3.35 00:25:27.113 Starting 3 threads 00:25:39.325 00:25:39.325 filename0: (groupid=0, jobs=1): err= 0: pid=84007: Wed Nov 27 06:20:42 2024 00:25:39.325 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10007msec) 00:25:39.325 slat (nsec): min=7015, max=91105, avg=17927.46, stdev=9668.09 00:25:39.325 clat (usec): min=9604, max=16206, avg=14353.97, stdev=951.33 00:25:39.325 lat (usec): min=9618, max=16221, avg=14371.89, stdev=950.92 00:25:39.325 clat percentiles (usec): 00:25:39.325 | 1.00th=[11731], 5.00th=[12125], 10.00th=[12649], 20.00th=[14091], 00:25:39.325 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:25:39.325 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15270], 95.00th=[15401], 00:25:39.325 | 99.00th=[15795], 99.50th=[15926], 99.90th=[16188], 99.95th=[16188], 00:25:39.325 | 99.99th=[16188] 00:25:39.325 bw ( KiB/s): min=25293, max=30720, per=33.36%, avg=26675.21, stdev=1513.64, samples=19 00:25:39.325 iops : min= 197, max= 240, avg=208.37, stdev=11.86, samples=19 00:25:39.325 lat (msec) : 10=0.14%, 20=99.86% 00:25:39.325 cpu : usr=94.43%, sys=4.57%, ctx=114, majf=0, minf=0 00:25:39.325 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.325 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.325 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:39.325 filename0: (groupid=0, jobs=1): err= 0: pid=84008: Wed Nov 27 06:20:42 2024 00:25:39.325 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(260MiB/10001msec) 00:25:39.325 slat (nsec): min=6423, max=68252, avg=14677.48, stdev=9330.96 00:25:39.325 clat (usec): min=11526, max=18553, avg=14373.18, stdev=939.51 00:25:39.325 lat (usec): min=11535, max=18578, avg=14387.86, stdev=939.63 00:25:39.325 clat percentiles (usec): 00:25:39.325 | 1.00th=[11731], 5.00th=[12256], 10.00th=[12649], 20.00th=[14091], 00:25:39.325 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:25:39.325 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15270], 95.00th=[15401], 00:25:39.325 | 99.00th=[15795], 99.50th=[15926], 99.90th=[18482], 99.95th=[18482], 00:25:39.325 | 99.99th=[18482] 00:25:39.325 bw ( KiB/s): min=25344, max=30720, per=33.36%, avg=26677.89, stdev=1532.63, samples=19 00:25:39.325 iops : min= 198, max= 240, avg=208.42, stdev=11.97, samples=19 00:25:39.325 lat (msec) : 20=100.00% 00:25:39.325 cpu : usr=92.61%, sys=5.74%, ctx=25, majf=0, minf=0 00:25:39.325 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.325 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.325 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:39.325 filename0: (groupid=0, jobs=1): err= 0: pid=84009: Wed Nov 27 06:20:42 2024 00:25:39.325 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10006msec) 00:25:39.325 slat (usec): min=6, max=113, avg=18.40, stdev=10.20 00:25:39.325 clat (usec): min=9601, max=16076, avg=14352.24, stdev=950.54 00:25:39.325 lat (usec): min=9615, max=16093, avg=14370.64, stdev=950.80 00:25:39.325 clat percentiles (usec): 00:25:39.325 | 1.00th=[11731], 5.00th=[12125], 10.00th=[12649], 20.00th=[14091], 00:25:39.325 | 30.00th=[14353], 40.00th=[14484], 
50.00th=[14615], 60.00th=[14746], 00:25:39.325 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15270], 95.00th=[15401], 00:25:39.325 | 99.00th=[15926], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:25:39.325 | 99.99th=[16057] 00:25:39.325 bw ( KiB/s): min=25344, max=30720, per=33.36%, avg=26677.89, stdev=1511.10, samples=19 00:25:39.325 iops : min= 198, max= 240, avg=208.42, stdev=11.81, samples=19 00:25:39.325 lat (msec) : 10=0.14%, 20=99.86% 00:25:39.325 cpu : usr=95.35%, sys=3.94%, ctx=14, majf=0, minf=0 00:25:39.325 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.325 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.325 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:39.325 00:25:39.325 Run status group 0 (all jobs): 00:25:39.325 READ: bw=78.1MiB/s (81.9MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=782MiB (819MB), run=10001-10007msec 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.325 ************************************ 00:25:39.325 END TEST fio_dif_digest 00:25:39.325 ************************************ 00:25:39.325 00:25:39.325 real 0m11.119s 00:25:39.325 user 0m28.957s 00:25:39.325 sys 0m1.754s 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.325 06:20:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:39.325 06:20:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:39.325 06:20:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:39.325 06:20:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.325 06:20:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:25:39.325 06:20:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.325 06:20:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:25:39.325 06:20:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.325 06:20:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.325 rmmod nvme_tcp 00:25:39.325 rmmod nvme_fabrics 00:25:39.326 rmmod nvme_keyring 00:25:39.326 06:20:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.326 06:20:42 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:25:39.326 06:20:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:25:39.326 06:20:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83251 ']' 00:25:39.326 06:20:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83251 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83251 ']' 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83251 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83251 00:25:39.326 killing process with pid 83251 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83251' 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83251 00:25:39.326 06:20:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83251 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:39.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:39.326 Waiting for block devices as requested 00:25:39.326 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:39.326 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:39.326 06:20:43 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:39.326 06:20:44 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:39.326 06:20:44 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.326 06:20:44 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.326 06:20:44 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:39.326 06:20:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.326 06:20:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:39.326 06:20:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.326 06:20:44 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:25:39.326 00:25:39.326 real 1m1.521s 00:25:39.326 user 4m1.982s 00:25:39.326 sys 0m17.682s 00:25:39.326 06:20:44 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.326 ************************************ 00:25:39.326 END TEST nvmf_dif 00:25:39.326 ************************************ 00:25:39.326 06:20:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:39.326 06:20:44 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:39.326 06:20:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:39.326 06:20:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:39.326 06:20:44 -- common/autotest_common.sh@10 -- # set +x 00:25:39.326 ************************************ 00:25:39.326 START TEST nvmf_abort_qd_sizes 00:25:39.326 ************************************ 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:39.326 * Looking for test storage... 00:25:39.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.326 --rc genhtml_branch_coverage=1 00:25:39.326 --rc genhtml_function_coverage=1 00:25:39.326 --rc genhtml_legend=1 00:25:39.326 --rc geninfo_all_blocks=1 00:25:39.326 --rc geninfo_unexecuted_blocks=1 00:25:39.326 00:25:39.326 ' 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.326 --rc genhtml_branch_coverage=1 00:25:39.326 --rc genhtml_function_coverage=1 00:25:39.326 --rc genhtml_legend=1 00:25:39.326 --rc geninfo_all_blocks=1 00:25:39.326 --rc geninfo_unexecuted_blocks=1 00:25:39.326 00:25:39.326 ' 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.326 --rc genhtml_branch_coverage=1 00:25:39.326 --rc genhtml_function_coverage=1 00:25:39.326 --rc genhtml_legend=1 00:25:39.326 --rc geninfo_all_blocks=1 00:25:39.326 --rc geninfo_unexecuted_blocks=1 00:25:39.326 00:25:39.326 ' 00:25:39.326 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.326 --rc genhtml_branch_coverage=1 00:25:39.326 --rc genhtml_function_coverage=1 00:25:39.326 --rc genhtml_legend=1 00:25:39.326 --rc geninfo_all_blocks=1 00:25:39.326 --rc geninfo_unexecuted_blocks=1 00:25:39.326 00:25:39.326 ' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.327 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:39.327 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:39.587 Cannot find device "nvmf_init_br" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:39.587 Cannot find device "nvmf_init_br2" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:39.587 Cannot find device "nvmf_tgt_br" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.587 Cannot find device "nvmf_tgt_br2" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:39.587 Cannot find device "nvmf_init_br" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:39.587 Cannot find device "nvmf_init_br2" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:39.587 Cannot find device "nvmf_tgt_br" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:39.587 Cannot find device "nvmf_tgt_br2" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:39.587 Cannot find device "nvmf_br" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:39.587 Cannot find device "nvmf_init_if" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:39.587 Cannot find device "nvmf_init_if2" 00:25:39.587 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:39.588 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:39.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:39.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:25:39.848 00:25:39.848 --- 10.0.0.3 ping statistics --- 00:25:39.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.848 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:39.848 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:39.848 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:25:39.848 00:25:39.848 --- 10.0.0.4 ping statistics --- 00:25:39.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.848 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:39.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:25:39.848 00:25:39.848 --- 10.0.0.1 ping statistics --- 00:25:39.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.848 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:25:39.848 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:39.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:39.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:25:39.848 00:25:39.848 --- 10.0.0.2 ping statistics --- 00:25:39.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.849 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:39.849 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.849 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:25:39.849 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:25:39.849 06:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:40.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:40.676 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.676 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84656 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84656 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84656 ']' 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.676 06:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:40.676 [2024-11-27 06:20:45.766870] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:25:40.676 [2024-11-27 06:20:45.767222] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.935 [2024-11-27 06:20:45.926201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.935 [2024-11-27 06:20:46.010718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.935 [2024-11-27 06:20:46.011010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.935 [2024-11-27 06:20:46.011201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.935 [2024-11-27 06:20:46.011347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.935 [2024-11-27 06:20:46.011398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.935 [2024-11-27 06:20:46.013144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.935 [2024-11-27 06:20:46.013267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.935 [2024-11-27 06:20:46.013957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.935 [2024-11-27 06:20:46.014002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.193 [2024-11-27 06:20:46.100470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:41.761 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.761 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:25:41.761 06:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.761 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.761 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:42.020 06:20:46 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:25:42.020 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.021 06:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:42.021 ************************************ 00:25:42.021 START TEST spdk_target_abort 00:25:42.021 ************************************ 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:42.021 spdk_targetn1 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.021 06:20:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:42.021 [2024-11-27 06:20:46.998113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:42.021 [2024-11-27 06:20:47.035682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:42.021 06:20:47 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:42.021 06:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:45.305 Initializing NVMe Controllers 00:25:45.305 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:45.305 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:45.306 Initialization complete. Launching workers. 
00:25:45.306 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8527, failed: 0 00:25:45.306 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1060, failed to submit 7467 00:25:45.306 success 854, unsuccessful 206, failed 0 00:25:45.306 06:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:45.306 06:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:49.505 Initializing NVMe Controllers 00:25:49.505 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:49.505 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:49.505 Initialization complete. Launching workers. 00:25:49.505 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8954, failed: 0 00:25:49.505 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1199, failed to submit 7755 00:25:49.505 success 361, unsuccessful 838, failed 0 00:25:49.505 06:20:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:49.505 06:20:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:52.053 Initializing NVMe Controllers 00:25:52.053 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:52.053 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:52.053 Initialization complete. Launching workers. 
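The three abort runs differ only in the queue depth pulled from qds=(4 24 64); each iteration of the loop traced above amounts to:

  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

Reading the counters: 'I/O completed' equals 'abort submitted' plus 'failed to submit' (8527 = 1060 + 7467 in the first run), and 'abort submitted' splits into 'success' plus 'unsuccessful' (1060 = 854 + 206). Broadly, 'unsuccessful' means the abort command itself completed but the target had already finished the I/O it was aimed at.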
00:25:52.053 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29995, failed: 0 00:25:52.053 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2312, failed to submit 27683 00:25:52.053 success 395, unsuccessful 1917, failed 0 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.053 06:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.987 06:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84656 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84656 ']' 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84656 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84656 00:25:52.987 killing process with pid 84656 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.987 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.988 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84656' 00:25:52.988 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84656 00:25:52.988 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84656 00:25:53.246 00:25:53.246 real 0m11.409s 00:25:53.246 user 0m46.655s 00:25:53.246 sys 0m1.967s 00:25:53.246 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.246 06:20:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 ************************************ 00:25:53.246 END TEST spdk_target_abort 00:25:53.246 ************************************ 00:25:53.504 06:20:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:53.504 06:20:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.504 06:20:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.504 06:20:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:53.504 ************************************ 00:25:53.504 START TEST kernel_target_abort 00:25:53.504 
************************************ 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:53.504 06:20:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:53.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:53.763 Waiting for block devices as requested 00:25:54.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:54.022 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:54.022 No valid GPT data, bailing 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:54.022 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:54.295 No valid GPT data, bailing 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:54.295 No valid GPT data, bailing 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:54.295 No valid GPT data, bailing 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:54.295 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 --hostid=34bde053-797d-42f4-ad97-2a3b315837d0 -a 10.0.0.1 -t tcp -s 4420 00:25:54.560 00:25:54.560 Discovery Log Number of Records 2, Generation counter 2 00:25:54.560 =====Discovery Log Entry 0====== 00:25:54.560 trtype: tcp 00:25:54.560 adrfam: ipv4 00:25:54.560 subtype: current discovery subsystem 00:25:54.560 treq: not specified, sq flow control disable supported 00:25:54.560 portid: 1 00:25:54.560 trsvcid: 4420 00:25:54.560 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:54.561 traddr: 10.0.0.1 00:25:54.561 eflags: none 00:25:54.561 sectype: none 00:25:54.561 =====Discovery Log Entry 1====== 00:25:54.561 trtype: tcp 00:25:54.561 adrfam: ipv4 00:25:54.561 subtype: nvme subsystem 00:25:54.561 treq: not specified, sq flow control disable supported 00:25:54.561 portid: 1 00:25:54.561 trsvcid: 4420 00:25:54.561 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:54.561 traddr: 10.0.0.1 00:25:54.561 eflags: none 00:25:54.561 sectype: none 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:54.561 06:20:59 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:54.561 06:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:57.848 Initializing NVMe Controllers 00:25:57.848 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:57.848 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:57.848 Initialization complete. Launching workers. 00:25:57.848 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30445, failed: 0 00:25:57.848 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30445, failed to submit 0 00:25:57.848 success 0, unsuccessful 30445, failed 0 00:25:57.848 06:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:57.848 06:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:01.133 Initializing NVMe Controllers 00:26:01.133 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:01.133 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:01.133 Initialization complete. Launching workers. 
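The kernel-target variant builds an equivalent subsystem out of the in-kernel nvmet configfs tree rather than SPDK RPCs. Before that, the loop over /sys/block/nvme* above settled on /dev/nvme1n1 as the backing device: the first namespace that is not zoned and for which spdk-gpt.py and blkid find no partition table. The xtrace output shows only the bare echo values, not their redirect targets; mapped onto the standard nvmet configfs attribute names (that mapping is an assumption here), configure_kernel_target does approximately:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"     # assumed target of the first bare 'echo 1'
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

The 'echo SPDK-nqn.2016-06.io.spdk:testnqn' in the trace sets an identification string on the subsystem (its exact attribute file is elided), and the nvme discover output above confirms the subsystem is then reachable on 10.0.0.1:4420.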
00:26:01.133 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62345, failed: 0 00:26:01.133 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25913, failed to submit 36432 00:26:01.133 success 0, unsuccessful 25913, failed 0 00:26:01.133 06:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:01.133 06:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:04.436 Initializing NVMe Controllers 00:26:04.436 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:04.436 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:04.436 Initialization complete. Launching workers. 00:26:04.436 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80871, failed: 0 00:26:04.436 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20170, failed to submit 60701 00:26:04.436 success 0, unsuccessful 20170, failed 0 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:04.436 06:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:04.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:08.958 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:09.217 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:09.217 00:26:09.217 real 0m15.775s 00:26:09.217 user 0m6.411s 00:26:09.217 sys 0m6.739s 00:26:09.217 06:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.217 06:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.217 ************************************ 00:26:09.217 END TEST kernel_target_abort 00:26:09.217 ************************************ 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:09.217 
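clean_kernel_target, traced just above, undoes that configuration in reverse; condensed, and assuming the elided target of 'echo 0' is the namespace enable flag:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet

The closing setup.sh run then hands the NVMe devices back to uio_pci_generic, which is what the '(1b36 0010): nvme -> uio_pci_generic' lines report.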
06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.217 rmmod nvme_tcp 00:26:09.217 rmmod nvme_fabrics 00:26:09.217 rmmod nvme_keyring 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:26:09.217 Process with pid 84656 is not found 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84656 ']' 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84656 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84656 ']' 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84656 00:26:09.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84656) - No such process 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84656 is not found' 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:26:09.217 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:09.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:09.785 Waiting for block devices as requested 00:26:09.785 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:09.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:10.045 06:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:10.045 06:21:15 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:10.045 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:26:10.317 00:26:10.317 real 0m31.077s 00:26:10.317 user 0m54.510s 00:26:10.317 sys 0m10.265s 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.317 ************************************ 00:26:10.317 END TEST nvmf_abort_qd_sizes 00:26:10.317 ************************************ 00:26:10.317 06:21:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:10.317 06:21:15 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:10.317 06:21:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:10.317 06:21:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.317 06:21:15 -- common/autotest_common.sh@10 -- # set +x 00:26:10.317 ************************************ 00:26:10.317 START TEST keyring_file 00:26:10.317 ************************************ 00:26:10.318 06:21:15 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:10.318 * Looking for test storage... 
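nvmf_tcp_fini, traced above, strips the SPDK-specific iptables rules and removes the veth/bridge topology that carried the 10.0.0.x test network; condensed to its effective commands, with the interface and namespace names as they appear in the trace (the body of _remove_spdk_ns is redirected away, so the final namespace deletion is an assumption):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                        # assumed effect of _remove_spdk_ns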
00:26:10.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:10.318 06:21:15 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:10.318 06:21:15 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:26:10.318 06:21:15 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:10.588 06:21:15 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.588 06:21:15 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:26:10.589 06:21:15 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.589 06:21:15 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.589 --rc genhtml_branch_coverage=1 00:26:10.589 --rc genhtml_function_coverage=1 00:26:10.589 --rc genhtml_legend=1 00:26:10.589 --rc geninfo_all_blocks=1 00:26:10.589 --rc geninfo_unexecuted_blocks=1 00:26:10.589 00:26:10.589 ' 00:26:10.589 06:21:15 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.589 --rc genhtml_branch_coverage=1 00:26:10.589 --rc genhtml_function_coverage=1 00:26:10.589 --rc genhtml_legend=1 00:26:10.589 --rc geninfo_all_blocks=1 00:26:10.589 --rc 
geninfo_unexecuted_blocks=1 00:26:10.589 00:26:10.589 ' 00:26:10.589 06:21:15 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.589 --rc genhtml_branch_coverage=1 00:26:10.589 --rc genhtml_function_coverage=1 00:26:10.589 --rc genhtml_legend=1 00:26:10.589 --rc geninfo_all_blocks=1 00:26:10.589 --rc geninfo_unexecuted_blocks=1 00:26:10.589 00:26:10.589 ' 00:26:10.589 06:21:15 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.589 --rc genhtml_branch_coverage=1 00:26:10.589 --rc genhtml_function_coverage=1 00:26:10.589 --rc genhtml_legend=1 00:26:10.589 --rc geninfo_all_blocks=1 00:26:10.589 --rc geninfo_unexecuted_blocks=1 00:26:10.589 00:26:10.589 ' 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.589 06:21:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.589 06:21:15 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.589 06:21:15 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.589 06:21:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.589 06:21:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:10.589 06:21:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:10.589 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:10.589 06:21:15 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Zl9NzwMLMg 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zl9NzwMLMg 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Zl9NzwMLMg 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Zl9NzwMLMg 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YX9qLWrzgf 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:10.589 06:21:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YX9qLWrzgf 00:26:10.589 06:21:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YX9qLWrzgf 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YX9qLWrzgf 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=85580 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:10.589 06:21:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85580 00:26:10.590 06:21:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85580 ']' 00:26:10.590 06:21:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
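prep_key, traced above, writes each PSK to a temporary file in the NVMe/TCP TLS interchange format (the NVMeTLSkey-1 prefix handed to format_key; the python encoder body is elided by xtrace) and locks down its permissions. In outline, using the paths mktemp returned in this run:

  key0path=$(mktemp)      # /tmp/tmp.Zl9NzwMLMg here
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"
  key1path=$(mktemp)      # /tmp/tmp.YX9qLWrzgf here
  format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
  chmod 0600 "$key1path"

The 0600 mode is deliberate: a later negative test in this file flips a key to 0660 and expects keyring_file_add_key to reject it.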
00:26:10.590 06:21:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.590 06:21:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.590 06:21:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.590 06:21:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:10.849 [2024-11-27 06:21:15.735589] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:26:10.849 [2024-11-27 06:21:15.735712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85580 ] 00:26:10.849 [2024-11-27 06:21:15.897167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.109 [2024-11-27 06:21:15.981774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.109 [2024-11-27 06:21:16.087109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:11.676 06:21:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.676 06:21:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:11.676 06:21:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:11.676 06:21:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.676 06:21:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:11.934 [2024-11-27 06:21:16.773423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.934 null0 00:26:11.934 [2024-11-27 06:21:16.805438] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:11.934 [2024-11-27 06:21:16.805884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.934 06:21:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.934 06:21:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:11.934 [2024-11-27 06:21:16.837456] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:11.934 request: 00:26:11.934 { 00:26:11.935 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:11.935 "secure_channel": false, 00:26:11.935 "listen_address": { 00:26:11.935 "trtype": "tcp", 00:26:11.935 "traddr": "127.0.0.1", 
00:26:11.935 "trsvcid": "4420" 00:26:11.935 }, 00:26:11.935 "method": "nvmf_subsystem_add_listener", 00:26:11.935 "req_id": 1 00:26:11.935 } 00:26:11.935 Got JSON-RPC error response 00:26:11.935 response: 00:26:11.935 { 00:26:11.935 "code": -32602, 00:26:11.935 "message": "Invalid parameters" 00:26:11.935 } 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:11.935 06:21:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=85597 00:26:11.935 06:21:16 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:11.935 06:21:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85597 /var/tmp/bperf.sock 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85597 ']' 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.935 06:21:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:11.935 [2024-11-27 06:21:16.901888] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
00:26:11.935 [2024-11-27 06:21:16.901985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85597 ] 00:26:12.194 [2024-11-27 06:21:17.044807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.194 [2024-11-27 06:21:17.110204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.194 [2024-11-27 06:21:17.173795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:12.194 06:21:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.194 06:21:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:12.194 06:21:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:12.194 06:21:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:12.453 06:21:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YX9qLWrzgf 00:26:12.453 06:21:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YX9qLWrzgf 00:26:12.711 06:21:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:26:12.711 06:21:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:12.711 06:21:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:12.711 06:21:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:12.711 06:21:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:12.973 06:21:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Zl9NzwMLMg == \/\t\m\p\/\t\m\p\.\Z\l\9\N\z\w\M\L\M\g ]] 00:26:12.973 06:21:18 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:26:12.973 06:21:18 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:26:13.233 06:21:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:13.233 06:21:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:13.233 06:21:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:13.491 06:21:18 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.YX9qLWrzgf == \/\t\m\p\/\t\m\p\.\Y\X\9\q\L\W\r\z\g\f ]] 00:26:13.491 06:21:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:26:13.491 06:21:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:13.491 06:21:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:13.491 06:21:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:13.491 06:21:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:13.491 06:21:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:13.750 06:21:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:13.750 06:21:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:26:13.750 06:21:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:13.750 06:21:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:13.750 06:21:18 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:13.750 06:21:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:13.750 06:21:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:14.009 06:21:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:26:14.009 06:21:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:14.009 06:21:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:14.285 [2024-11-27 06:21:19.209923] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:14.285 nvme0n1 00:26:14.285 06:21:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:26:14.285 06:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:14.285 06:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:14.285 06:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:14.285 06:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:14.285 06:21:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:14.548 06:21:19 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:26:14.548 06:21:19 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:26:14.549 06:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:14.549 06:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:14.549 06:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:14.549 06:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:14.549 06:21:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:14.808 06:21:19 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:26:14.808 06:21:19 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.066 Running I/O for 1 seconds... 
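Taken together, the bperf-side sequence traced above registers both key files with the file keyring, attaches a TLS-protected NVMe-oF controller using key0, and then starts the workload, all over the private socket:

  sock=/var/tmp/bperf.sock
  scripts/rpc.py -s $sock keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg
  scripts/rpc.py -s $sock keyring_file_add_key key1 /tmp/tmp.YX9qLWrzgf
  scripts/rpc.py -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

While nvme0 is attached with key0, keyring_get_keys reports key0 at refcnt 2 (the keyring's own reference plus the controller using it) and key1 at refcnt 1, which is what the (( 2 == 2 )) and (( 1 == 1 )) checks assert.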
00:26:16.002 12187.00 IOPS, 47.61 MiB/s 00:26:16.002 Latency(us) 00:26:16.002 [2024-11-27T06:21:21.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.002 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:16.002 nvme0n1 : 1.01 12244.09 47.83 0.00 0.00 10431.00 4349.21 18707.55 00:26:16.002 [2024-11-27T06:21:21.099Z] =================================================================================================================== 00:26:16.002 [2024-11-27T06:21:21.099Z] Total : 12244.09 47.83 0.00 0.00 10431.00 4349.21 18707.55 00:26:16.002 { 00:26:16.002 "results": [ 00:26:16.002 { 00:26:16.002 "job": "nvme0n1", 00:26:16.002 "core_mask": "0x2", 00:26:16.002 "workload": "randrw", 00:26:16.002 "percentage": 50, 00:26:16.002 "status": "finished", 00:26:16.002 "queue_depth": 128, 00:26:16.002 "io_size": 4096, 00:26:16.002 "runtime": 1.005955, 00:26:16.002 "iops": 12244.086465100327, 00:26:16.002 "mibps": 47.82846275429815, 00:26:16.002 "io_failed": 0, 00:26:16.002 "io_timeout": 0, 00:26:16.002 "avg_latency_us": 10430.999539143977, 00:26:16.002 "min_latency_us": 4349.2072727272725, 00:26:16.002 "max_latency_us": 18707.54909090909 00:26:16.002 } 00:26:16.002 ], 00:26:16.002 "core_count": 1 00:26:16.002 } 00:26:16.002 06:21:20 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:16.003 06:21:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:16.261 06:21:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:26:16.261 06:21:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:16.261 06:21:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:16.261 06:21:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:16.261 06:21:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:16.261 06:21:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:16.521 06:21:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:16.521 06:21:21 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:26:16.779 06:21:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:16.779 06:21:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:16.779 06:21:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:16.779 06:21:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:16.779 06:21:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:17.037 06:21:21 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:26:17.037 06:21:21 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:17.037 06:21:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:17.037 06:21:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:17.037 06:21:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:17.037 06:21:21 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:17.037 06:21:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:17.037 06:21:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:17.037 06:21:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:17.037 06:21:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:17.295 [2024-11-27 06:21:22.192527] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:17.295 [2024-11-27 06:21:22.192601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7105d0 (107): Transport endpoint is not connected 00:26:17.295 [2024-11-27 06:21:22.193583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7105d0 (9): Bad file descriptor 00:26:17.295 [2024-11-27 06:21:22.194591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:17.295 [2024-11-27 06:21:22.194781] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:17.295 [2024-11-27 06:21:22.194798] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:17.295 [2024-11-27 06:21:22.194812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
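The attach above is issued with key1 on purpose: the target in this run was configured for key0, so the TLS handshake cannot complete, the initiator logs the transport and controller errors shown, and rpc.py exits non-zero with the JSON-RPC error that follows. The step still counts as a pass because file.sh wraps the call in the NOT helper from autotest_common.sh, which succeeds only when the wrapped command fails cleanly. A minimal stand-in for that pattern, not the real helper (which also validates the argument and, as the es checks below show, treats exit codes above 128 as a crash rather than a legitimate failure):

  expect_failure() {
      # run the command and capture its exit status
      local es=0
      "$@" || es=$?
      # anything above 128 means the command died from a signal, not a clean error
      (( es > 128 )) && return 1
      # invert: this helper succeeds only if the wrapped command itself failed
      (( es != 0 ))
  }

  # usage sketch mirroring the wrong-key attach above
  expect_failure /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The name expect_failure is made up for illustration; in the test the helper is literally called NOT.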
00:26:17.295 request: 00:26:17.295 { 00:26:17.295 "name": "nvme0", 00:26:17.295 "trtype": "tcp", 00:26:17.295 "traddr": "127.0.0.1", 00:26:17.295 "adrfam": "ipv4", 00:26:17.295 "trsvcid": "4420", 00:26:17.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.295 "prchk_reftag": false, 00:26:17.295 "prchk_guard": false, 00:26:17.295 "hdgst": false, 00:26:17.295 "ddgst": false, 00:26:17.295 "psk": "key1", 00:26:17.295 "allow_unrecognized_csi": false, 00:26:17.295 "method": "bdev_nvme_attach_controller", 00:26:17.295 "req_id": 1 00:26:17.295 } 00:26:17.295 Got JSON-RPC error response 00:26:17.295 response: 00:26:17.295 { 00:26:17.295 "code": -5, 00:26:17.295 "message": "Input/output error" 00:26:17.295 } 00:26:17.295 06:21:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:17.295 06:21:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:17.295 06:21:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:17.295 06:21:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:17.295 06:21:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:26:17.295 06:21:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:17.295 06:21:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:17.295 06:21:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:17.295 06:21:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:17.295 06:21:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:17.553 06:21:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:17.553 06:21:22 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:26:17.553 06:21:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:17.553 06:21:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:17.553 06:21:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:17.553 06:21:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:17.553 06:21:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:17.812 06:21:22 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:26:17.812 06:21:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:26:17.812 06:21:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:18.072 06:21:23 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:26:18.072 06:21:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:18.333 06:21:23 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:26:18.333 06:21:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:18.333 06:21:23 keyring_file -- keyring/file.sh@78 -- # jq length 00:26:18.592 06:21:23 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:26:18.592 06:21:23 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Zl9NzwMLMg 00:26:18.592 06:21:23 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:18.592 06:21:23 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:26:18.592 06:21:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:18.592 06:21:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:18.592 06:21:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:18.592 06:21:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:18.592 06:21:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:18.592 06:21:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:18.592 06:21:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:18.852 [2024-11-27 06:21:23.888896] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zl9NzwMLMg': 0100660 00:26:18.852 [2024-11-27 06:21:23.888938] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:18.852 request: 00:26:18.852 { 00:26:18.852 "name": "key0", 00:26:18.852 "path": "/tmp/tmp.Zl9NzwMLMg", 00:26:18.852 "method": "keyring_file_add_key", 00:26:18.852 "req_id": 1 00:26:18.852 } 00:26:18.852 Got JSON-RPC error response 00:26:18.852 response: 00:26:18.852 { 00:26:18.852 "code": -1, 00:26:18.852 "message": "Operation not permitted" 00:26:18.852 } 00:26:18.852 06:21:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:18.852 06:21:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:18.852 06:21:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:18.852 06:21:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:18.852 06:21:23 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Zl9NzwMLMg 00:26:18.852 06:21:23 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:18.852 06:21:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zl9NzwMLMg 00:26:19.111 06:21:24 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Zl9NzwMLMg 00:26:19.111 06:21:24 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:26:19.111 06:21:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:19.111 06:21:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:19.111 06:21:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:19.111 06:21:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:19.111 06:21:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:19.679 06:21:24 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:26:19.679 06:21:24 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:19.679 06:21:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:19.679 06:21:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:19.679 06:21:24 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:19.679 06:21:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.679 06:21:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:19.679 06:21:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.679 06:21:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:19.679 06:21:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:19.679 [2024-11-27 06:21:24.725202] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Zl9NzwMLMg': No such file or directory 00:26:19.679 [2024-11-27 06:21:24.725271] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:19.679 [2024-11-27 06:21:24.725292] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:19.679 [2024-11-27 06:21:24.725300] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:26:19.679 [2024-11-27 06:21:24.725309] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:19.679 [2024-11-27 06:21:24.725317] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:19.679 request: 00:26:19.679 { 00:26:19.679 "name": "nvme0", 00:26:19.679 "trtype": "tcp", 00:26:19.679 "traddr": "127.0.0.1", 00:26:19.679 "adrfam": "ipv4", 00:26:19.679 "trsvcid": "4420", 00:26:19.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:19.679 "prchk_reftag": false, 00:26:19.679 "prchk_guard": false, 00:26:19.679 "hdgst": false, 00:26:19.679 "ddgst": false, 00:26:19.679 "psk": "key0", 00:26:19.679 "allow_unrecognized_csi": false, 00:26:19.679 "method": "bdev_nvme_attach_controller", 00:26:19.680 "req_id": 1 00:26:19.680 } 00:26:19.680 Got JSON-RPC error response 00:26:19.680 response: 00:26:19.680 { 00:26:19.680 "code": -19, 00:26:19.680 "message": "No such device" 00:26:19.680 } 00:26:19.680 06:21:24 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:19.680 06:21:24 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:19.680 06:21:24 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:19.680 06:21:24 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:19.680 06:21:24 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:26:19.680 06:21:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:20.263 06:21:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:20.263 
06:21:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.A8QLSMXfw3 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:20.263 06:21:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:20.263 06:21:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:20.263 06:21:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:20.263 06:21:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:20.263 06:21:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:20.263 06:21:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.A8QLSMXfw3 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.A8QLSMXfw3 00:26:20.263 06:21:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.A8QLSMXfw3 00:26:20.263 06:21:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A8QLSMXfw3 00:26:20.263 06:21:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A8QLSMXfw3 00:26:20.532 06:21:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:20.532 06:21:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:20.791 nvme0n1 00:26:20.791 06:21:25 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:26:20.791 06:21:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:20.791 06:21:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:20.791 06:21:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:20.791 06:21:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:20.791 06:21:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:21.358 06:21:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:26:21.358 06:21:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:26:21.358 06:21:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:21.358 06:21:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:26:21.358 06:21:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:26:21.358 06:21:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:21.358 06:21:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:21.358 06:21:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:21.925 06:21:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:26:21.925 06:21:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:26:21.925 06:21:26 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:26:21.925 06:21:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:21.925 06:21:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:21.925 06:21:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:21.925 06:21:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:21.925 06:21:26 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:26:21.925 06:21:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:21.925 06:21:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:22.491 06:21:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:26:22.491 06:21:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:22.491 06:21:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:26:22.491 06:21:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:26:22.491 06:21:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A8QLSMXfw3 00:26:22.491 06:21:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A8QLSMXfw3 00:26:22.749 06:21:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YX9qLWrzgf 00:26:22.750 06:21:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YX9qLWrzgf 00:26:23.316 06:21:28 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:23.316 06:21:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:23.575 nvme0n1 00:26:23.575 06:21:28 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:26:23.575 06:21:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:23.834 06:21:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:26:23.834 "subsystems": [ 00:26:23.834 { 00:26:23.834 "subsystem": "keyring", 00:26:23.834 "config": [ 00:26:23.834 { 00:26:23.834 "method": "keyring_file_add_key", 00:26:23.834 "params": { 00:26:23.834 "name": "key0", 00:26:23.834 "path": "/tmp/tmp.A8QLSMXfw3" 00:26:23.834 } 00:26:23.834 }, 00:26:23.834 { 00:26:23.834 "method": "keyring_file_add_key", 00:26:23.834 "params": { 00:26:23.834 "name": "key1", 00:26:23.834 "path": "/tmp/tmp.YX9qLWrzgf" 00:26:23.834 } 00:26:23.834 } 00:26:23.834 ] 00:26:23.834 }, 00:26:23.834 { 00:26:23.834 "subsystem": "iobuf", 00:26:23.834 "config": [ 00:26:23.834 { 00:26:23.834 "method": "iobuf_set_options", 00:26:23.834 "params": { 00:26:23.834 "small_pool_count": 8192, 00:26:23.834 "large_pool_count": 1024, 00:26:23.834 "small_bufsize": 8192, 00:26:23.834 "large_bufsize": 135168, 00:26:23.834 "enable_numa": false 00:26:23.834 } 00:26:23.834 } 00:26:23.834 ] 00:26:23.834 }, 00:26:23.834 { 00:26:23.834 "subsystem": 
"sock", 00:26:23.834 "config": [ 00:26:23.834 { 00:26:23.834 "method": "sock_set_default_impl", 00:26:23.834 "params": { 00:26:23.834 "impl_name": "uring" 00:26:23.834 } 00:26:23.834 }, 00:26:23.834 { 00:26:23.834 "method": "sock_impl_set_options", 00:26:23.834 "params": { 00:26:23.834 "impl_name": "ssl", 00:26:23.835 "recv_buf_size": 4096, 00:26:23.835 "send_buf_size": 4096, 00:26:23.835 "enable_recv_pipe": true, 00:26:23.835 "enable_quickack": false, 00:26:23.835 "enable_placement_id": 0, 00:26:23.835 "enable_zerocopy_send_server": true, 00:26:23.835 "enable_zerocopy_send_client": false, 00:26:23.835 "zerocopy_threshold": 0, 00:26:23.835 "tls_version": 0, 00:26:23.835 "enable_ktls": false 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "sock_impl_set_options", 00:26:23.835 "params": { 00:26:23.835 "impl_name": "posix", 00:26:23.835 "recv_buf_size": 2097152, 00:26:23.835 "send_buf_size": 2097152, 00:26:23.835 "enable_recv_pipe": true, 00:26:23.835 "enable_quickack": false, 00:26:23.835 "enable_placement_id": 0, 00:26:23.835 "enable_zerocopy_send_server": true, 00:26:23.835 "enable_zerocopy_send_client": false, 00:26:23.835 "zerocopy_threshold": 0, 00:26:23.835 "tls_version": 0, 00:26:23.835 "enable_ktls": false 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "sock_impl_set_options", 00:26:23.835 "params": { 00:26:23.835 "impl_name": "uring", 00:26:23.835 "recv_buf_size": 2097152, 00:26:23.835 "send_buf_size": 2097152, 00:26:23.835 "enable_recv_pipe": true, 00:26:23.835 "enable_quickack": false, 00:26:23.835 "enable_placement_id": 0, 00:26:23.835 "enable_zerocopy_send_server": false, 00:26:23.835 "enable_zerocopy_send_client": false, 00:26:23.835 "zerocopy_threshold": 0, 00:26:23.835 "tls_version": 0, 00:26:23.835 "enable_ktls": false 00:26:23.835 } 00:26:23.835 } 00:26:23.835 ] 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "subsystem": "vmd", 00:26:23.835 "config": [] 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "subsystem": "accel", 00:26:23.835 "config": [ 00:26:23.835 { 00:26:23.835 "method": "accel_set_options", 00:26:23.835 "params": { 00:26:23.835 "small_cache_size": 128, 00:26:23.835 "large_cache_size": 16, 00:26:23.835 "task_count": 2048, 00:26:23.835 "sequence_count": 2048, 00:26:23.835 "buf_count": 2048 00:26:23.835 } 00:26:23.835 } 00:26:23.835 ] 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "subsystem": "bdev", 00:26:23.835 "config": [ 00:26:23.835 { 00:26:23.835 "method": "bdev_set_options", 00:26:23.835 "params": { 00:26:23.835 "bdev_io_pool_size": 65535, 00:26:23.835 "bdev_io_cache_size": 256, 00:26:23.835 "bdev_auto_examine": true, 00:26:23.835 "iobuf_small_cache_size": 128, 00:26:23.835 "iobuf_large_cache_size": 16 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "bdev_raid_set_options", 00:26:23.835 "params": { 00:26:23.835 "process_window_size_kb": 1024, 00:26:23.835 "process_max_bandwidth_mb_sec": 0 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "bdev_iscsi_set_options", 00:26:23.835 "params": { 00:26:23.835 "timeout_sec": 30 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "bdev_nvme_set_options", 00:26:23.835 "params": { 00:26:23.835 "action_on_timeout": "none", 00:26:23.835 "timeout_us": 0, 00:26:23.835 "timeout_admin_us": 0, 00:26:23.835 "keep_alive_timeout_ms": 10000, 00:26:23.835 "arbitration_burst": 0, 00:26:23.835 "low_priority_weight": 0, 00:26:23.835 "medium_priority_weight": 0, 00:26:23.835 "high_priority_weight": 0, 00:26:23.835 "nvme_adminq_poll_period_us": 
10000, 00:26:23.835 "nvme_ioq_poll_period_us": 0, 00:26:23.835 "io_queue_requests": 512, 00:26:23.835 "delay_cmd_submit": true, 00:26:23.835 "transport_retry_count": 4, 00:26:23.835 "bdev_retry_count": 3, 00:26:23.835 "transport_ack_timeout": 0, 00:26:23.835 "ctrlr_loss_timeout_sec": 0, 00:26:23.835 "reconnect_delay_sec": 0, 00:26:23.835 "fast_io_fail_timeout_sec": 0, 00:26:23.835 "disable_auto_failback": false, 00:26:23.835 "generate_uuids": false, 00:26:23.835 "transport_tos": 0, 00:26:23.835 "nvme_error_stat": false, 00:26:23.835 "rdma_srq_size": 0, 00:26:23.835 "io_path_stat": false, 00:26:23.835 "allow_accel_sequence": false, 00:26:23.835 "rdma_max_cq_size": 0, 00:26:23.835 "rdma_cm_event_timeout_ms": 0, 00:26:23.835 "dhchap_digests": [ 00:26:23.835 "sha256", 00:26:23.835 "sha384", 00:26:23.835 "sha512" 00:26:23.835 ], 00:26:23.835 "dhchap_dhgroups": [ 00:26:23.835 "null", 00:26:23.835 "ffdhe2048", 00:26:23.835 "ffdhe3072", 00:26:23.835 "ffdhe4096", 00:26:23.835 "ffdhe6144", 00:26:23.835 "ffdhe8192" 00:26:23.835 ] 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "bdev_nvme_attach_controller", 00:26:23.835 "params": { 00:26:23.835 "name": "nvme0", 00:26:23.835 "trtype": "TCP", 00:26:23.835 "adrfam": "IPv4", 00:26:23.835 "traddr": "127.0.0.1", 00:26:23.835 "trsvcid": "4420", 00:26:23.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.835 "prchk_reftag": false, 00:26:23.835 "prchk_guard": false, 00:26:23.835 "ctrlr_loss_timeout_sec": 0, 00:26:23.835 "reconnect_delay_sec": 0, 00:26:23.835 "fast_io_fail_timeout_sec": 0, 00:26:23.835 "psk": "key0", 00:26:23.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.835 "hdgst": false, 00:26:23.835 "ddgst": false, 00:26:23.835 "multipath": "multipath" 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "bdev_nvme_set_hotplug", 00:26:23.835 "params": { 00:26:23.835 "period_us": 100000, 00:26:23.835 "enable": false 00:26:23.835 } 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "method": "bdev_wait_for_examine" 00:26:23.835 } 00:26:23.835 ] 00:26:23.835 }, 00:26:23.835 { 00:26:23.835 "subsystem": "nbd", 00:26:23.835 "config": [] 00:26:23.835 } 00:26:23.835 ] 00:26:23.835 }' 00:26:23.835 06:21:28 keyring_file -- keyring/file.sh@115 -- # killprocess 85597 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85597 ']' 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85597 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85597 00:26:23.835 killing process with pid 85597 00:26:23.835 Received shutdown signal, test time was about 1.000000 seconds 00:26:23.835 00:26:23.835 Latency(us) 00:26:23.835 [2024-11-27T06:21:28.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.835 [2024-11-27T06:21:28.932Z] =================================================================================================================== 00:26:23.835 [2024-11-27T06:21:28.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85597' 00:26:23.835 
06:21:28 keyring_file -- common/autotest_common.sh@973 -- # kill 85597 00:26:23.835 06:21:28 keyring_file -- common/autotest_common.sh@978 -- # wait 85597 00:26:24.096 06:21:29 keyring_file -- keyring/file.sh@118 -- # bperfpid=85846 00:26:24.096 06:21:29 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85846 /var/tmp/bperf.sock 00:26:24.096 06:21:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85846 ']' 00:26:24.096 06:21:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.096 06:21:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.096 06:21:29 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:24.096 06:21:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.096 06:21:29 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:26:24.096 "subsystems": [ 00:26:24.096 { 00:26:24.096 "subsystem": "keyring", 00:26:24.096 "config": [ 00:26:24.096 { 00:26:24.096 "method": "keyring_file_add_key", 00:26:24.096 "params": { 00:26:24.096 "name": "key0", 00:26:24.096 "path": "/tmp/tmp.A8QLSMXfw3" 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "keyring_file_add_key", 00:26:24.096 "params": { 00:26:24.096 "name": "key1", 00:26:24.096 "path": "/tmp/tmp.YX9qLWrzgf" 00:26:24.096 } 00:26:24.096 } 00:26:24.096 ] 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "subsystem": "iobuf", 00:26:24.096 "config": [ 00:26:24.096 { 00:26:24.096 "method": "iobuf_set_options", 00:26:24.096 "params": { 00:26:24.096 "small_pool_count": 8192, 00:26:24.096 "large_pool_count": 1024, 00:26:24.096 "small_bufsize": 8192, 00:26:24.096 "large_bufsize": 135168, 00:26:24.096 "enable_numa": false 00:26:24.096 } 00:26:24.096 } 00:26:24.096 ] 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "subsystem": "sock", 00:26:24.096 "config": [ 00:26:24.096 { 00:26:24.096 "method": "sock_set_default_impl", 00:26:24.096 "params": { 00:26:24.096 "impl_name": "uring" 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "sock_impl_set_options", 00:26:24.096 "params": { 00:26:24.096 "impl_name": "ssl", 00:26:24.096 "recv_buf_size": 4096, 00:26:24.096 "send_buf_size": 4096, 00:26:24.096 "enable_recv_pipe": true, 00:26:24.096 "enable_quickack": false, 00:26:24.096 "enable_placement_id": 0, 00:26:24.096 "enable_zerocopy_send_server": true, 00:26:24.096 "enable_zerocopy_send_client": false, 00:26:24.096 "zerocopy_threshold": 0, 00:26:24.096 "tls_version": 0, 00:26:24.096 "enable_ktls": false 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "sock_impl_set_options", 00:26:24.096 "params": { 00:26:24.096 "impl_name": "posix", 00:26:24.096 "recv_buf_size": 2097152, 00:26:24.096 "send_buf_size": 2097152, 00:26:24.096 "enable_recv_pipe": true, 00:26:24.096 "enable_quickack": false, 00:26:24.096 "enable_placement_id": 0, 00:26:24.096 "enable_zerocopy_send_server": true, 00:26:24.096 "enable_zerocopy_send_client": false, 00:26:24.096 "zerocopy_threshold": 0, 00:26:24.096 "tls_version": 0, 00:26:24.096 "enable_ktls": false 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "sock_impl_set_options", 00:26:24.096 "params": { 00:26:24.096 "impl_name": "uring", 00:26:24.096 "recv_buf_size": 2097152, 00:26:24.096 "send_buf_size": 2097152, 00:26:24.096 "enable_recv_pipe": true, 
00:26:24.096 "enable_quickack": false, 00:26:24.096 "enable_placement_id": 0, 00:26:24.096 "enable_zerocopy_send_server": false, 00:26:24.096 "enable_zerocopy_send_client": false, 00:26:24.096 "zerocopy_threshold": 0, 00:26:24.096 "tls_version": 0, 00:26:24.096 "enable_ktls": false 00:26:24.096 } 00:26:24.096 } 00:26:24.096 ] 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "subsystem": "vmd", 00:26:24.096 "config": [] 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "subsystem": "accel", 00:26:24.096 "config": [ 00:26:24.096 { 00:26:24.096 "method": "accel_set_options", 00:26:24.096 "params": { 00:26:24.096 "small_cache_size": 128, 00:26:24.096 "large_cache_size": 16, 00:26:24.096 "task_count": 2048, 00:26:24.096 "sequence_count": 2048, 00:26:24.096 "buf_count": 2048 00:26:24.096 } 00:26:24.096 } 00:26:24.096 ] 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "subsystem": "bdev", 00:26:24.096 "config": [ 00:26:24.096 { 00:26:24.096 "method": "bdev_set_options", 00:26:24.096 "params": { 00:26:24.096 "bdev_io_pool_size": 65535, 00:26:24.096 "bdev_io_cache_size": 256, 00:26:24.096 "bdev_auto_examine": true, 00:26:24.096 "iobuf_small_cache_size": 128, 00:26:24.096 "iobuf_large_cache_size": 16 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "bdev_raid_set_options", 00:26:24.096 "params": { 00:26:24.096 "process_window_size_kb": 1024, 00:26:24.096 "process_max_bandwidth_mb_sec": 0 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "bdev_iscsi_set_options", 00:26:24.096 "params": { 00:26:24.096 "timeout_sec": 30 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "bdev_nvme_set_options", 00:26:24.096 "params": { 00:26:24.096 "action_on_timeout": "none", 00:26:24.096 "timeout_us": 0, 00:26:24.096 "timeout_admin_us": 0, 00:26:24.096 "keep_alive_timeout_ms": 10000, 00:26:24.096 "arbitration_burst": 0, 00:26:24.096 "low_priority_weight": 0, 00:26:24.096 "medium_priority_weight": 0, 00:26:24.096 "high_priority_weight": 0, 00:26:24.096 "nvme_adminq_poll_period_us": 10000, 00:26:24.096 "nvme_ioq_poll_period_us": 0, 00:26:24.096 "io_queue_requests": 512, 00:26:24.096 "delay_cmd_submit": true, 00:26:24.096 "transport_retry_count": 4, 00:26:24.096 "bdev_retry_count": 3, 00:26:24.096 "transport_ack_timeout": 0, 00:26:24.096 "ctrlr_loss_timeout_sec": 0, 00:26:24.096 "reconnect_delay_sec": 0, 00:26:24.096 "fast_io_fail_timeout_sec": 0, 00:26:24.096 "disable_auto_failback": false, 00:26:24.096 "generate_uuids": false, 00:26:24.096 "transport_tos": 0, 00:26:24.096 "nvme_error_stat": false, 00:26:24.096 "rdma_srq_size": 0, 00:26:24.096 "io_path_stat": false, 00:26:24.096 "allow_accel_sequence": false, 00:26:24.096 "rdma_max_cq_size": 0, 00:26:24.096 "rdma_cm_event_timeout_ms": 0, 00:26:24.096 "dhchap_digests": [ 00:26:24.096 "sha256", 00:26:24.096 "sha384", 00:26:24.096 "sha512" 00:26:24.096 ], 00:26:24.096 "dhchap_dhgroups": [ 00:26:24.096 "null", 00:26:24.096 "ffdhe2048", 00:26:24.096 "ffdhe3072", 00:26:24.096 "ffdhe4096", 00:26:24.096 "ffdhe6144", 00:26:24.096 "ffdhe8192" 00:26:24.096 ] 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "bdev_nvme_attach_controller", 00:26:24.096 "params": { 00:26:24.096 "name": "nvme0", 00:26:24.096 "trtype": "TCP", 00:26:24.096 "adrfam": "IPv4", 00:26:24.096 "traddr": "127.0.0.1", 00:26:24.096 "trsvcid": "4420", 00:26:24.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.096 "prchk_reftag": false, 00:26:24.096 "prchk_guard": false, 00:26:24.096 "ctrlr_loss_timeout_sec": 0, 00:26:24.096 "reconnect_delay_sec": 0, 
00:26:24.096 "fast_io_fail_timeout_sec": 0, 00:26:24.096 "psk": "key0", 00:26:24.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.096 "hdgst": false, 00:26:24.096 "ddgst": false, 00:26:24.096 "multipath": "multipath" 00:26:24.096 } 00:26:24.096 }, 00:26:24.096 { 00:26:24.096 "method": "bdev_nvme_set_hotplug", 00:26:24.096 "params": { 00:26:24.096 "period_us": 100000, 00:26:24.096 "enable": false 00:26:24.096 } 00:26:24.097 }, 00:26:24.097 { 00:26:24.097 "method": "bdev_wait_for_examine" 00:26:24.097 } 00:26:24.097 ] 00:26:24.097 }, 00:26:24.097 { 00:26:24.097 "subsystem": "nbd", 00:26:24.097 "config": [] 00:26:24.097 } 00:26:24.097 ] 00:26:24.097 }' 00:26:24.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.097 06:21:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.097 06:21:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:24.097 [2024-11-27 06:21:29.154493] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 00:26:24.097 [2024-11-27 06:21:29.154605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85846 ] 00:26:24.354 [2024-11-27 06:21:29.308018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.354 [2024-11-27 06:21:29.377649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.612 [2024-11-27 06:21:29.524568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:24.612 [2024-11-27 06:21:29.589667] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:25.179 06:21:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.179 06:21:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:25.179 06:21:30 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:26:25.179 06:21:30 keyring_file -- keyring/file.sh@121 -- # jq length 00:26:25.179 06:21:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:25.746 06:21:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:25.746 06:21:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:26:25.746 06:21:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:25.746 06:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:25.746 06:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:25.746 06:21:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:25.746 06:21:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:26.006 06:21:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:26:26.006 06:21:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:26:26.006 06:21:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:26.006 06:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:26.006 06:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:26.006 06:21:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:26.006 06:21:30 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:26.264 06:21:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:26:26.264 06:21:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:26:26.264 06:21:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:26:26.264 06:21:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:26.523 06:21:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:26:26.523 06:21:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:26.523 06:21:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.A8QLSMXfw3 /tmp/tmp.YX9qLWrzgf 00:26:26.523 06:21:31 keyring_file -- keyring/file.sh@20 -- # killprocess 85846 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85846 ']' 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85846 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85846 00:26:26.523 killing process with pid 85846 00:26:26.523 Received shutdown signal, test time was about 1.000000 seconds 00:26:26.523 00:26:26.523 Latency(us) 00:26:26.523 [2024-11-27T06:21:31.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.523 [2024-11-27T06:21:31.620Z] =================================================================================================================== 00:26:26.523 [2024-11-27T06:21:31.620Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85846' 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@973 -- # kill 85846 00:26:26.523 06:21:31 keyring_file -- common/autotest_common.sh@978 -- # wait 85846 00:26:26.782 06:21:31 keyring_file -- keyring/file.sh@21 -- # killprocess 85580 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85580 ']' 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85580 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85580 00:26:26.782 killing process with pid 85580 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85580' 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@973 -- # kill 85580 00:26:26.782 06:21:31 keyring_file -- common/autotest_common.sh@978 -- # wait 85580 00:26:27.349 ************************************ 00:26:27.349 END TEST keyring_file 00:26:27.349 ************************************ 00:26:27.349 00:26:27.349 real 0m17.050s 00:26:27.349 user 0m42.321s 00:26:27.349 sys 0m3.511s 
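The keyring_file run wraps up here and the suite moves on to keyring_linux, which exercises the same attach/detach flow but takes its PSKs from the kernel session keyring instead of plain files. Both tests drive bdevperf through a few small wrappers from test/keyring/common.sh that show up in every xtrace line above; reconstructed from those lines, they amount to roughly the following (a sketch inferred from the log, not a copy of the script):

  # talk to the bdevperf instance listening on the bperf RPC socket
  bperf_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # pull one key object out of keyring_get_keys by name
  get_key() { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }

  # the (( N == N )) assertions above compare refcounts returned by this helper
  get_refcnt() { get_key "$1" | jq -r .refcnt; }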
00:26:27.349 06:21:32 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.349 06:21:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:27.349 06:21:32 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:26:27.349 06:21:32 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:27.349 06:21:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:27.349 06:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.349 06:21:32 -- common/autotest_common.sh@10 -- # set +x 00:26:27.350 ************************************ 00:26:27.350 START TEST keyring_linux 00:26:27.350 ************************************ 00:26:27.350 06:21:32 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:27.350 Joined session keyring: 811801452 00:26:27.609 * Looking for test storage... 00:26:27.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.609 06:21:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:27.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.609 --rc genhtml_branch_coverage=1 00:26:27.609 --rc genhtml_function_coverage=1 00:26:27.609 --rc genhtml_legend=1 00:26:27.609 --rc geninfo_all_blocks=1 00:26:27.609 --rc geninfo_unexecuted_blocks=1 00:26:27.609 00:26:27.609 ' 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:27.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.609 --rc genhtml_branch_coverage=1 00:26:27.609 --rc genhtml_function_coverage=1 00:26:27.609 --rc genhtml_legend=1 00:26:27.609 --rc geninfo_all_blocks=1 00:26:27.609 --rc geninfo_unexecuted_blocks=1 00:26:27.609 00:26:27.609 ' 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:27.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.609 --rc genhtml_branch_coverage=1 00:26:27.609 --rc genhtml_function_coverage=1 00:26:27.609 --rc genhtml_legend=1 00:26:27.609 --rc geninfo_all_blocks=1 00:26:27.609 --rc geninfo_unexecuted_blocks=1 00:26:27.609 00:26:27.609 ' 00:26:27.609 06:21:32 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:27.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.609 --rc genhtml_branch_coverage=1 00:26:27.609 --rc genhtml_function_coverage=1 00:26:27.609 --rc genhtml_legend=1 00:26:27.609 --rc geninfo_all_blocks=1 00:26:27.609 --rc geninfo_unexecuted_blocks=1 00:26:27.609 00:26:27.609 ' 00:26:27.609 06:21:32 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:27.609 06:21:32 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:27.609 06:21:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:27.609 06:21:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.609 06:21:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.610 06:21:32 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:34bde053-797d-42f4-ad97-2a3b315837d0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=34bde053-797d-42f4-ad97-2a3b315837d0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.610 06:21:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.610 06:21:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.610 06:21:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.610 06:21:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.610 06:21:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.610 06:21:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.610 06:21:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.610 06:21:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:27.610 06:21:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:27.610 /tmp/:spdk-test:key0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:27.610 06:21:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:27.610 06:21:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:27.610 06:21:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:27.870 06:21:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:27.870 /tmp/:spdk-test:key1 00:26:27.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.870 06:21:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:27.870 06:21:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85979 00:26:27.870 06:21:32 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:27.870 06:21:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85979 00:26:27.870 06:21:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85979 ']' 00:26:27.870 06:21:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.870 06:21:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.870 06:21:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.870 06:21:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.870 06:21:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 [2024-11-27 06:21:32.770824] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
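prep_key above writes each PSK to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 after running it through format_interchange_psk, the same helper keyring_file used earlier. Judging by the NVMeTLSkey-1:00:...: strings visible in the keyctl lines below, the helper emits the TLS PSK interchange form: a version prefix, a two-digit hash indicator (00 meaning the configured key is not hashed), and a base64 blob. A hedged sketch of that conversion, assuming the blob is the key bytes followed by a little-endian CRC-32 of those bytes (the real helper lives in test/nvmf/common.sh and shells out to python just as the xtrace shows):

  # hypothetical re-implementation of format_interchange_psk, for illustration only
  format_interchange_psk() {
      local key=$1 digest=$2
      # base64(key || crc32(key)), with the CRC byte order assumed little-endian
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
  }

  # usage matching this run, including the 0600 mode keyring_file_add_key insists on
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0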
00:26:27.870 [2024-11-27 06:21:32.771255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85979 ] 00:26:27.870 [2024-11-27 06:21:32.908439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.129 [2024-11-27 06:21:32.984242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.129 [2024-11-27 06:21:33.072684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:26:28.389 06:21:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:28.389 [2024-11-27 06:21:33.358457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.389 null0 00:26:28.389 [2024-11-27 06:21:33.390428] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:28.389 [2024-11-27 06:21:33.390681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.389 06:21:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:28.389 468636415 00:26:28.389 06:21:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:28.389 133491076 00:26:28.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.389 06:21:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85989 00:26:28.389 06:21:33 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:28.389 06:21:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85989 /var/tmp/bperf.sock 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85989 ']' 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.389 06:21:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:28.389 [2024-11-27 06:21:33.472688] Starting SPDK v25.01-pre git sha1 345c51d49 / DPDK 24.03.0 initialization... 
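Unlike keyring_file, this test never hands bdevperf a file path. linux.sh loads each interchange-format PSK into the kernel session keyring with keyctl (the whole test runs under scripts/keyctl-session-wrapper, which is why a fresh session keyring, 811801452, was joined above), and the serial numbers printed here, 468636415 and 133491076, are the kernel's handles for those two keys. The expanded commands captured in the xtrace are equivalent to:

  # add both PSKs to the session keyring (@s); the description ":spdk-test:keyN"
  # is what --psk will reference on the attach below
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

  # each add prints the new key's serial; it can be resolved again later with
  keyctl search @s user :spdk-test:key0

Whether the script reads the key material from the temp file or a variable is not visible in this log, so the cat is an assumption; the key type (user), the descriptions, and the @s target are taken verbatim from the output above.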
00:26:28.389 [2024-11-27 06:21:33.472988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85989 ] 00:26:28.647 [2024-11-27 06:21:33.621721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.648 [2024-11-27 06:21:33.685457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.648 06:21:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.648 06:21:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:26:28.648 06:21:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:28.648 06:21:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:29.215 06:21:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:29.215 06:21:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:29.474 [2024-11-27 06:21:34.351259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:29.474 06:21:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:29.474 06:21:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:29.732 [2024-11-27 06:21:34.704941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.732 nvme0n1 00:26:29.732 06:21:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:29.732 06:21:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:29.732 06:21:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:29.732 06:21:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:29.732 06:21:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:29.732 06:21:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:30.301 06:21:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:30.301 06:21:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:30.301 06:21:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@25 -- # sn=468636415 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 468636415 == \4\6\8\6\3\6\4\1\5 ]] 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 468636415 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:30.301 06:21:35 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.560 Running I/O for 1 seconds... 00:26:31.497 10178.00 IOPS, 39.76 MiB/s 00:26:31.497 Latency(us) 00:26:31.497 [2024-11-27T06:21:36.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.497 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:31.497 nvme0n1 : 1.01 10185.72 39.79 0.00 0.00 12487.94 10187.87 23235.49 00:26:31.497 [2024-11-27T06:21:36.594Z] =================================================================================================================== 00:26:31.497 [2024-11-27T06:21:36.594Z] Total : 10185.72 39.79 0.00 0.00 12487.94 10187.87 23235.49 00:26:31.497 { 00:26:31.497 "results": [ 00:26:31.497 { 00:26:31.497 "job": "nvme0n1", 00:26:31.497 "core_mask": "0x2", 00:26:31.497 "workload": "randread", 00:26:31.497 "status": "finished", 00:26:31.497 "queue_depth": 128, 00:26:31.497 "io_size": 4096, 00:26:31.497 "runtime": 1.011907, 00:26:31.497 "iops": 10185.718648057578, 00:26:31.497 "mibps": 39.787963468974915, 00:26:31.497 "io_failed": 0, 00:26:31.497 "io_timeout": 0, 00:26:31.497 "avg_latency_us": 12487.944135406653, 00:26:31.497 "min_latency_us": 10187.869090909091, 00:26:31.497 "max_latency_us": 23235.49090909091 00:26:31.497 } 00:26:31.497 ], 00:26:31.497 "core_count": 1 00:26:31.497 } 00:26:31.497 06:21:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:31.497 06:21:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:31.756 06:21:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:31.756 06:21:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:31.756 06:21:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:31.757 06:21:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:31.757 06:21:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:31.757 06:21:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:32.326 06:21:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:32.326 [2024-11-27 06:21:37.388265] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:32.326 [2024-11-27 06:21:37.388809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cd5d0 (107): Transport endpoint is not connected 00:26:32.326 [2024-11-27 06:21:37.389797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cd5d0 (9): Bad file descriptor 00:26:32.326 [2024-11-27 06:21:37.390793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:32.326 [2024-11-27 06:21:37.391019] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:32.326 [2024-11-27 06:21:37.391164] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:32.326 [2024-11-27 06:21:37.391363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:26:32.326 request: 00:26:32.326 { 00:26:32.326 "name": "nvme0", 00:26:32.326 "trtype": "tcp", 00:26:32.326 "traddr": "127.0.0.1", 00:26:32.326 "adrfam": "ipv4", 00:26:32.326 "trsvcid": "4420", 00:26:32.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.326 "prchk_reftag": false, 00:26:32.326 "prchk_guard": false, 00:26:32.326 "hdgst": false, 00:26:32.326 "ddgst": false, 00:26:32.326 "psk": ":spdk-test:key1", 00:26:32.326 "allow_unrecognized_csi": false, 00:26:32.326 "method": "bdev_nvme_attach_controller", 00:26:32.326 "req_id": 1 00:26:32.326 } 00:26:32.326 Got JSON-RPC error response 00:26:32.326 response: 00:26:32.326 { 00:26:32.326 "code": -5, 00:26:32.326 "message": "Input/output error" 00:26:32.326 } 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:32.326 06:21:37 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@33 -- # sn=468636415 00:26:32.326 06:21:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 468636415 00:26:32.586 1 links removed 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@33 -- # sn=133491076 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 133491076 00:26:32.586 1 links removed 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85989 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85989 ']' 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85989 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85989 00:26:32.586 killing process with pid 85989 00:26:32.586 Received shutdown signal, test time was about 1.000000 seconds 00:26:32.586 00:26:32.586 Latency(us) 00:26:32.586 [2024-11-27T06:21:37.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.586 [2024-11-27T06:21:37.683Z] =================================================================================================================== 00:26:32.586 [2024-11-27T06:21:37.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.586 06:21:37 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85989' 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 85989 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 85989 00:26:32.586 06:21:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85979 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85979 ']' 00:26:32.586 06:21:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85979 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85979 00:26:32.845 killing process with pid 85979 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85979' 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 85979 00:26:32.845 06:21:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 85979 00:26:33.413 ************************************ 00:26:33.413 END TEST keyring_linux 00:26:33.413 ************************************ 00:26:33.413 00:26:33.413 real 0m5.908s 00:26:33.413 user 0m11.206s 00:26:33.413 sys 0m1.755s 00:26:33.413 06:21:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.413 06:21:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:33.413 06:21:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:33.413 06:21:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:33.413 06:21:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:33.413 06:21:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:33.413 06:21:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:33.413 06:21:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:33.414 06:21:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:33.414 06:21:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:33.414 06:21:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:33.414 06:21:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:33.414 06:21:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:33.414 06:21:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:33.414 06:21:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:33.414 06:21:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:33.414 06:21:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:26:33.414 06:21:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:26:33.414 06:21:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:26:33.414 06:21:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.414 06:21:38 -- common/autotest_common.sh@10 -- # set +x 00:26:33.414 06:21:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:26:33.414 06:21:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:26:33.414 06:21:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:26:33.414 06:21:38 -- common/autotest_common.sh@10 -- # set +x 00:26:35.342 INFO: APP EXITING 00:26:35.342 INFO: killing all VMs 
00:26:35.342 INFO: killing vhost app 00:26:35.342 INFO: EXIT DONE 00:26:35.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:35.912 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:35.912 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:36.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:36.850 Cleaning 00:26:36.850 Removing: /var/run/dpdk/spdk0/config 00:26:36.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:36.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:36.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:36.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:36.850 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:36.850 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:36.850 Removing: /var/run/dpdk/spdk1/config 00:26:36.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:36.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:36.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:36.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:36.850 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:36.850 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:36.850 Removing: /var/run/dpdk/spdk2/config 00:26:36.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:36.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:36.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:36.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:36.850 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:36.850 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:36.850 Removing: /var/run/dpdk/spdk3/config 00:26:36.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:36.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:36.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:36.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:36.850 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:36.850 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:36.850 Removing: /var/run/dpdk/spdk4/config 00:26:36.850 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:36.850 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:36.850 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:36.850 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:36.850 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:36.850 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:36.850 Removing: /dev/shm/nvmf_trace.0 00:26:36.850 Removing: /dev/shm/spdk_tgt_trace.pid56948 00:26:36.850 Removing: /var/run/dpdk/spdk0 00:26:36.850 Removing: /var/run/dpdk/spdk1 00:26:36.850 Removing: /var/run/dpdk/spdk2 00:26:36.850 Removing: /var/run/dpdk/spdk3 00:26:36.850 Removing: /var/run/dpdk/spdk4 00:26:36.850 Removing: /var/run/dpdk/spdk_pid56795 00:26:36.850 Removing: /var/run/dpdk/spdk_pid56948 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57152 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57233 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57259 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57368 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57379 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57515 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57725 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57879 00:26:36.850 Removing: /var/run/dpdk/spdk_pid57957 00:26:36.850 
Removing: /var/run/dpdk/spdk_pid58028 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58131 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58223 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58256 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58286 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58361 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58453 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58903 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58947 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58991 00:26:36.850 Removing: /var/run/dpdk/spdk_pid58999 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59072 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59088 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59159 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59176 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59222 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59244 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59291 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59309 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59445 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59475 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59565 00:26:36.850 Removing: /var/run/dpdk/spdk_pid59897 00:26:37.109 Removing: /var/run/dpdk/spdk_pid59909 00:26:37.109 Removing: /var/run/dpdk/spdk_pid59940 00:26:37.109 Removing: /var/run/dpdk/spdk_pid59959 00:26:37.109 Removing: /var/run/dpdk/spdk_pid59980 00:26:37.109 Removing: /var/run/dpdk/spdk_pid59999 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60018 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60028 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60053 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60066 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60087 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60106 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60120 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60135 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60160 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60174 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60190 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60214 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60228 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60243 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60280 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60295 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60324 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60396 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60425 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60434 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60463 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60478 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60484 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60528 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60541 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60570 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60585 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60589 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60606 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60610 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60625 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60633 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60644 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60678 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60699 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60714 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60748 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60752 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60765 00:26:37.109 Removing: /var/run/dpdk/spdk_pid60800 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60817 00:26:37.110 Removing: 
/var/run/dpdk/spdk_pid60849 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60851 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60864 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60866 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60879 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60887 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60894 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60907 00:26:37.110 Removing: /var/run/dpdk/spdk_pid60989 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61037 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61155 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61195 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61234 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61254 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61271 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61291 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61329 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61344 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61421 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61444 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61488 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61561 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61628 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61657 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61756 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61799 00:26:37.110 Removing: /var/run/dpdk/spdk_pid61831 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62063 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62161 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62189 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62219 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62252 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62286 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62320 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62356 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62750 00:26:37.110 Removing: /var/run/dpdk/spdk_pid62790 00:26:37.110 Removing: /var/run/dpdk/spdk_pid63132 00:26:37.369 Removing: /var/run/dpdk/spdk_pid63596 00:26:37.369 Removing: /var/run/dpdk/spdk_pid63873 00:26:37.369 Removing: /var/run/dpdk/spdk_pid64769 00:26:37.369 Removing: /var/run/dpdk/spdk_pid65701 00:26:37.369 Removing: /var/run/dpdk/spdk_pid65824 00:26:37.369 Removing: /var/run/dpdk/spdk_pid65886 00:26:37.369 Removing: /var/run/dpdk/spdk_pid67302 00:26:37.369 Removing: /var/run/dpdk/spdk_pid67617 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71315 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71673 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71782 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71922 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71943 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71964 00:26:37.369 Removing: /var/run/dpdk/spdk_pid71993 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72073 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72214 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72363 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72445 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72631 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72707 00:26:37.369 Removing: /var/run/dpdk/spdk_pid72792 00:26:37.369 Removing: /var/run/dpdk/spdk_pid73152 00:26:37.369 Removing: /var/run/dpdk/spdk_pid73562 00:26:37.369 Removing: /var/run/dpdk/spdk_pid73563 00:26:37.369 Removing: /var/run/dpdk/spdk_pid73564 00:26:37.369 Removing: /var/run/dpdk/spdk_pid73828 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74086 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74474 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74482 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74808 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74828 
00:26:37.369 Removing: /var/run/dpdk/spdk_pid74842 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74874 00:26:37.369 Removing: /var/run/dpdk/spdk_pid74885 00:26:37.369 Removing: /var/run/dpdk/spdk_pid75238 00:26:37.369 Removing: /var/run/dpdk/spdk_pid75287 00:26:37.369 Removing: /var/run/dpdk/spdk_pid75607 00:26:37.369 Removing: /var/run/dpdk/spdk_pid75810 00:26:37.369 Removing: /var/run/dpdk/spdk_pid76232 00:26:37.369 Removing: /var/run/dpdk/spdk_pid76775 00:26:37.369 Removing: /var/run/dpdk/spdk_pid77644 00:26:37.369 Removing: /var/run/dpdk/spdk_pid78278 00:26:37.369 Removing: /var/run/dpdk/spdk_pid78281 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80310 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80358 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80423 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80471 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80586 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80633 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80687 00:26:37.369 Removing: /var/run/dpdk/spdk_pid80744 00:26:37.369 Removing: /var/run/dpdk/spdk_pid81096 00:26:37.369 Removing: /var/run/dpdk/spdk_pid82307 00:26:37.369 Removing: /var/run/dpdk/spdk_pid82452 00:26:37.369 Removing: /var/run/dpdk/spdk_pid82696 00:26:37.369 Removing: /var/run/dpdk/spdk_pid83305 00:26:37.369 Removing: /var/run/dpdk/spdk_pid83460 00:26:37.369 Removing: /var/run/dpdk/spdk_pid83617 00:26:37.369 Removing: /var/run/dpdk/spdk_pid83714 00:26:37.369 Removing: /var/run/dpdk/spdk_pid83890 00:26:37.369 Removing: /var/run/dpdk/spdk_pid83999 00:26:37.369 Removing: /var/run/dpdk/spdk_pid84714 00:26:37.369 Removing: /var/run/dpdk/spdk_pid84749 00:26:37.369 Removing: /var/run/dpdk/spdk_pid84786 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85040 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85071 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85106 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85580 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85597 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85846 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85979 00:26:37.369 Removing: /var/run/dpdk/spdk_pid85989 00:26:37.369 Clean 00:26:37.628 06:21:42 -- common/autotest_common.sh@1453 -- # return 0 00:26:37.628 06:21:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:26:37.628 06:21:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.628 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:37.628 06:21:42 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:26:37.628 06:21:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.628 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:37.628 06:21:42 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:37.628 06:21:42 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:37.628 06:21:42 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:37.628 06:21:42 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:26:37.628 06:21:42 -- spdk/autotest.sh@398 -- # hostname 00:26:37.628 06:21:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:37.886 geninfo: WARNING: invalid characters removed from testname! 
00:27:04.450 06:22:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:07.738 06:22:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:11.029 06:22:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:13.597 06:22:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:16.886 06:22:21 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:19.422 06:22:24 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:22.710 06:22:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:22.710 06:22:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:22.710 06:22:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:22.710 06:22:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:22.710 06:22:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:22.710 06:22:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:22.710 + [[ -n 5377 ]] 00:27:22.710 + sudo kill 5377 00:27:22.749 [Pipeline] } 00:27:22.764 [Pipeline] // timeout 00:27:22.769 [Pipeline] } 00:27:22.784 [Pipeline] // stage 00:27:22.791 [Pipeline] } 00:27:22.806 [Pipeline] // catchError 00:27:22.816 [Pipeline] stage 00:27:22.818 [Pipeline] { (Stop VM) 00:27:22.830 [Pipeline] sh 00:27:23.166 + vagrant halt 00:27:26.451 ==> default: Halting domain... 
00:27:33.029 [Pipeline] sh 00:27:33.304 + vagrant destroy -f 00:27:36.602 ==> default: Removing domain... 00:27:36.611 [Pipeline] sh 00:27:36.884 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:27:36.891 [Pipeline] } 00:27:36.906 [Pipeline] // stage 00:27:36.910 [Pipeline] } 00:27:36.922 [Pipeline] // dir 00:27:36.928 [Pipeline] } 00:27:36.942 [Pipeline] // wrap 00:27:36.947 [Pipeline] } 00:27:36.962 [Pipeline] // catchError 00:27:36.972 [Pipeline] stage 00:27:36.975 [Pipeline] { (Epilogue) 00:27:36.986 [Pipeline] sh 00:27:37.260 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:43.907 [Pipeline] catchError 00:27:43.909 [Pipeline] { 00:27:43.922 [Pipeline] sh 00:27:44.204 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:44.204 Artifacts sizes are good 00:27:44.214 [Pipeline] } 00:27:44.230 [Pipeline] // catchError 00:27:44.241 [Pipeline] archiveArtifacts 00:27:44.248 Archiving artifacts 00:27:44.375 [Pipeline] cleanWs 00:27:44.388 [WS-CLEANUP] Deleting project workspace... 00:27:44.388 [WS-CLEANUP] Deferred wipeout is used... 00:27:44.394 [WS-CLEANUP] done 00:27:44.395 [Pipeline] } 00:27:44.411 [Pipeline] // stage 00:27:44.416 [Pipeline] } 00:27:44.431 [Pipeline] // node 00:27:44.437 [Pipeline] End of Pipeline 00:27:44.474 Finished: SUCCESS